RAID1 Failure Testing with mdadm
Background
Failure testing of software RAID (mdadm) across a range of scenarios.
I. Initial Information
Kernel version:
root@omv30:~# uname -a
Linux omv30 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.6-1~bpo9+1 (2018-09-13) x86_64 GNU/Linux
root@omv30:~# mdadm --version
mdadm - v3.4 - 28th January 2016
After creating the RAID1 array in OMV, query sdb. At this point sdb carries the Device UUID ending in 8ac693c5 and is device number 1:
root@omv30:~# mdadm --query /dev/sdb
/dev/sdb: is not an md array
/dev/sdb: device 1 in 2 device undetected raid1 /dev/md0. Use mdadm --examine for more detail.
root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Name : omv30:raid1 (local to host omv30)
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1960 sectors, after=0 sectors
State : clean
Device UUID : 64a58fb5:c7e76b1a:29453878:8ac693c5
Update Time : Mon Oct 1 13:20:56 2018
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 2e1fb65b - correct
Events : 21
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 19:46:42 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 25
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 16 1 active sync /dev/sdb
Configuration file:
root@omv30:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=omv30:raid1 UUID=921a8946:b273e00e:3fa4b99d:040a4437
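For reference, the array here was created through the OMV web UI. A roughly equivalent command-line creation would be the following sketch (device names match this setup; the exact options OMV passed are not shown in the transcript, so treat this as an assumption):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u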
II. Drive-Letter Shuffle Tests
1. Swapping drive letters
In VirtualBox under Storage > SATA, select each of the two disks in turn and exchange their SATA port numbers in the attributes panel on the right; this swaps the drive letters.
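The same swap can be scripted with VBoxManage while the VM is powered off. A hedged sketch; the VM name (omv30), controller name (SATA) and disk image paths are assumptions:
VBoxManage storageattach omv30 --storagectl SATA --port 1 --device 0 --medium none
VBoxManage storageattach omv30 --storagectl SATA --port 2 --device 0 --medium none
VBoxManage storageattach omv30 --storagectl SATA --port 1 --device 0 --type hdd --medium disk2.vdi
VBoxManage storageattach omv30 --storagectl SATA --port 2 --device 0 --type hdd --medium disk1.vdi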
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 19:52:46 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 29
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
The drive letters have indeed been swapped.
root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Name : omv30:raid1 (local to host omv30)
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1960 sectors, after=0 sectors
State : clean
Device UUID : 6e545465:3dcf10df:1d5bb938:fe840307
Update Time : Mon Oct 1 19:52:46 2018
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : a47a7d1f - correct
Events : 29
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb[0] sdc[1]
1047552 blocks super 1.2 [2/2] [UU]
unused devices: <none>
root@omv30:~# fdisk -l
...(omitted)
Disk /dev/md0: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
After mounting, the data is accessible as normal.
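(The mount step itself was along these lines; /mnt/md is the mount point that appears in a later prompt in this article:)
mkdir -p /mnt/md
mount /dev/md0 /mnt/md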
2. Moving a disk to a different slot
Shut down, add a new disk that takes over sdc's original slot, then boot:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 19:52:46 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 29
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb[0] sdd[1]
1047552 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The original sdc has become sdd, and the RAID1 array remains intact and unaffected.
3. Removing a disk
Shut down, remove one of the disks, then boot:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 29
Number Major Minor RaidDevice
- 8 16 - /dev/sdb
root@omv30:~# mdadm --query /dev/sdb
/dev/sdb: is not an md array
/dev/sdb: device 0 in 2 device undetected raid1 /dev/md0. Use mdadm --examine for more detail.
root@omv30:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Name : omv30:raid1 (local to host omv30)
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1960 sectors, after=0 sectors
State : clean
Device UUID : 6e545465:3dcf10df:1d5bb938:fe840307
Update Time : Mon Oct 1 19:52:46 2018
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : a47a7d1f - correct
Events : 29
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@omv30:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[0](S)
1047552 blocks super 1.2
unused devices: <none>
root@omv30:~# fdisk -l
Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8c9b0fb9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 12582911 12580864 6G 83 Linux
/dev/sda2 12584958 16775167 4190210 2G 5 Extended
/dev/sda5 12584960 16775167 4190208 2G 82 Linux swap / Solaris
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The RAID1 array has gone inactive, but the RAID metadata lives on the disk itself and is not lost.
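To bring such an inactive, degraded array back online without the missing disk, mdadm can be told to run it anyway. A sketch, not part of the original test run:
mdadm --stop /dev/md0
mdadm --assemble --scan --run    # --run starts the array even with only 1 of 2 members present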
4. Conclusions
- mdadm does not track RAID membership by drive letter (/dev/sdx); however the letters change, the RAID metadata stays consistent.
- mdadm identifies member disks by Device UUID (see the quick check below). This differs from hardware RAID enclosures, which record slot numbers.
- With mdadm, therefore, each disk can sit in any enclosure or slot; there is no need to track physical position.
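A quick way to verify the UUID-based bookkeeping: dump the Array UUID, Device UUID and role of each member, whatever their current letters. A sketch reusing the --examine fields shown above (the /dev/sd[bcd] glob is an assumption for this VM):
for d in /dev/sd[bcd]; do
    echo "== $d =="
    mdadm --examine "$d" 2>/dev/null | grep -E 'Array UUID|Device UUID|Device Role'
done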
III. Degraded-RAID Recovery Tests
Scenario: a normally running RAID1 suddenly loses one disk, and the array must be rebuilt.
Method: either simulate the failure with --fail, or use VirtualBox's hot-plug feature.
1. Simulating a failure with --fail
Initial state:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:29:44 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 31
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdd
(1) Manually fail sdd:
root@omv30:~# mdadm /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:29:59 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 33
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
- 0 0 1 removed
1 8 48 - faulty /dev/sdd
If the failed disk is still attached to the array, remove it first:
root@omv30:~# mdadm /dev/md0 -r /dev/sdd
mdadm: hot remove failed for /dev/sdd: No such device or address
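For a disk that is still present, the usual replacement sequence combines both steps in one manage-mode invocation, e.g.:
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd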
(2) Add a new disk:
root@omv30:~# mdadm /dev/md0 --add /dev/sdc    (the new disk is 2 GB; testing showed this does not affect the RAID1 rebuild)
mdadm: added /dev/sdc
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:36:22 2018
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 76% complete
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 48
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
2 8 32 1 spare rebuilding /dev/sdc
The rebuild is in progress.
Run the query again a moment later:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 20:36:24 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 53
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
2 8 32 1 active sync /dev/sdc
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[2] sdb[0]
1047552 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The rebuild has completed successfully.
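To follow a rebuild continuously instead of re-running mdadm -D, watching /proc/mdstat is handy (a small convenience, not part of the original transcript):
watch -n 1 cat /proc/mdstat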
2. Hot-plug disk test
In VirtualBox's Storage settings, tick the hot-pluggable checkbox for sdb, then boot.
While the system is running, remove the sdb disk in VirtualBox's Storage settings, then check the status:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Mon Oct 1 21:13:45 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 56
Number Major Minor RaidDevice State
- 0 0 0 removed
2 8 32 1 active sync /dev/sdc
The system log shows the disk going offline and the RAID1 array degrading (log traceability is one advantage of software RAID):
root@omv30:/var/log# dmesg | tail -20
[ 340.551533] md: recovery of RAID array md0
[ 345.881625] md: md0: recovery done.
[ 657.932091] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[ 2571.324851] ata2: SATA link down (SStatus 0 SControl 300)
[ 2576.667851] ata2: SATA link down (SStatus 0 SControl 300)
[ 2582.044796] ata2: SATA link down (SStatus 0 SControl 300)
[ 2582.044864] ata2.00: disabled
[ 2582.045573] ata2.00: detaching (SCSI 3:0:0:0)
[ 2582.058467] sd 3:0:0:0: [sdb] Synchronizing SCSI cache
[ 2582.058528] sd 3:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 2582.058536] sd 3:0:0:0: [sdb] Stopping disk
[ 2582.058552] sd 3:0:0:0: [sdb] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 2586.709149] md/raid1:md0: Disk failure on sdb, disabling device.
md/raid1:md0: Operation continuing on 1 devices.
Add a new disk. The replacement must not be smaller than the existing member; larger is fine.
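Before adding, it is worth confirming the replacement really is at least as large as the surviving member; a sketch with blockdev (device names assumed):
blockdev --getsize64 /dev/sdb    # surviving member
blockdev --getsize64 /dev/sdc    # replacement; must report a value >= the line above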
root@omv30:~# mdadm /dev/md0 --add /dev/sdc
The rebuild starts:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 21:38:36 2018
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 56% complete
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 69
Number Major Minor RaidDevice State
3 8 32 0 spare rebuilding /dev/sdc
2 8 16 1 active sync /dev/sdb
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[3] sdb[2]
1047552 blocks super 1.2 [2/1] [_U]
[===================>.] recovery = 95.0% (996288/1047552) finish=0.0min speed=249072K/sec
unused devices: <none>
Rebuild complete:
root@omv30:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 21:38:39 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : omv30:raid1 (local to host omv30)
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 78
Number Major Minor RaidDevice State
3 8 32 0 active sync /dev/sdc
2 8 16 1 active sync /dev/sdb
root@omv30:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[3] sdb[2]
1047552 blocks super 1.2 [2/2] [UU]
unused devices: <none>
IV. Recovering RAID1 After an OS Reinstall
1. Both RAID1 disks healthy
Attach both disks to a freshly installed Ubuntu system and boot.
root@UB13:/home/op# fdisk -l
...(omitted)
Disk /dev/md127: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@UB13:/home/op# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdd[1] sdc[0]
1047552 blocks super 1.2 [2/2] [UU]
unused devices: <none>
As you can see, /dev/md127 was recognized automatically.
root@UB13:/mnt/md# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Oct 1 19:31:08 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : omv30:raid1
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 25
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
The system recognized and restored the RAID1 array automatically; there was no need to run mdadm --assemble --scan.
Checking /etc/mdadm/mdadm.conf shows the array definition was also added automatically.
If the device info was not generated automatically, run:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
After mounting, the data is accessible as normal.
How to change md127 back to md0 is covered in the next subsection.
2. One RAID1 disk lost
Only one disk survives, so the RAID1 array must be rebuilt on the new system.
root@op:/home/op# fdisk -l
(no new md device found; detailed output omitted)
root@op:/home/op# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Name : omv30:raid1
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 4192256 (2047.00 MiB 2146.44 MB)
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 2095104 (1023.00 MiB 1072.69 MB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1960 sectors, after=2097152 sectors
State : clean
Device UUID : 6e2fc709:35a8f6fb:d4c0e242:6905437d
Update Time : Mon Oct 1 21:40:25 2018
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e65b30a8 - correct
Events : 78
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@op:/home/op# cat /proc/mdstat
Personalities :
unused devices: <none>
The RAID1 metadata is still present on the newly attached disk; the array only needs to be re-assembled:
root@op:/home/op# mdadm --assemble --scan
mdadm: /dev/md/raid1 has been started with 1 drive (out of 2).
root@op:/home/op# fdisk -l
...(omitted)
Disk /dev/md127: 1023 MiB, 1072693248 bytes, 2095104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@op:/home/op# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Sep 30 22:31:39 2018
Raid Level : raid1
Array Size : 1047552 (1023.00 MiB 1072.69 MB)
Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Mon Oct 1 21:40:25 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : omv30:raid1
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 78
Number Major Minor RaidDevice State
- 0 0 0 removed
2 8 16 1 active sync /dev/sdb
Add a new disk. The replacement must not be smaller than the existing member; larger is fine.
First copy the partition table:
sfdisk -d /dev/sdb | sfdisk /dev/sdc
root@omv30:~# mdadm /dev/md127 --add /dev/sdc
Wait for the sync to finish.
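Rather than polling /proc/mdstat, mdadm can block until recovery completes; a minimal sketch:
mdadm --wait /dev/md127 && echo recovery finished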
Check /etc/mdadm/mdadm.conf; if the device info was not generated automatically, run:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
If you reboot at this point, fdisk -l no longer shows md127, and mdadm -D shows the RAID1 array as inactive.
root@op:/home/op# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Working Devices : 1
Name : omv30:raid1
UUID : 921a8946:b273e00e:3fa4b99d:040a4437
Events : 78
Number Major Minor RaidDevice
- 8 16 - /dev/sdb
At this point, first stop the stale array with mdadm -S /dev/md127,
then run mdadm --assemble --scan again,
and finally add the disk to complete the rebuild:
root@omv30:~# mdadm /dev/md0 --add /dev/sdc
Rebooting after this causes no further problems; the only change is that the array that was md0 on the original host now appears as md127.
Fix: edit /etc/mdadm/mdadm.conf, changing the second column from /dev/md/raid1 to /dev/md0, then run:
update-initramfs -u
Reboot, and the array comes back as md0.
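As a one-liner, the edit plus initramfs refresh might look like this sketch (check the resulting ARRAY line before rebooting):
sed -i 's|^ARRAY /dev/md/raid1 |ARRAY /dev/md0 |' /etc/mdadm/mdadm.conf
update-initramfs -u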
V. Common Commands
Check status:
mdadm -D /dev/md0
cat /proc/mdstat
mdadm --examine /dev/sdb
mdadm --detail --scan
Stop an array:
mdadm -S /dev/md0
Activate (assemble) an array:
mdadm -A /dev/md0
Re-import a RAID set on a new OS:
mdadm --assemble --scan
Rebuild (add a replacement disk):
mdadm /dev/md0 --add /dev/sdc
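Putting these together, a full single-disk replacement on a degraded RAID1 reduces to the following sequence (a recap of the tests above; device names are examples):
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd    # drop the bad member (skip if it is already gone)
mdadm /dev/md0 --add /dev/sdc                       # add the replacement (same size or larger)
watch -n 1 cat /proc/mdstat                         # watch the rebuild until [UU]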