OP · Posted on 2009-05-01 00:06:24
Last edited by zongyongchun on 2009-05-01 00:35
Hot-swap disk support on real hardware is out of scope here.
The experiment is done; here are the steps:
1. Create four 100 MB files and one 50 MB file to serve as virtual disks.
[root@node02 ~]# dd if=/dev/zero of=01.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.202768 seconds, 517 MB/s
[root@node02 ~]# dd if=/dev/zero of=02.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.833963 seconds, 126 MB/s
[root@node02 ~]# dd if=/dev/zero of=03.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.20868 seconds, 502 MB/s
[root@node02 ~]# dd if=/dev/zero of=04.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.217478 seconds, 482 MB/s
[root@node02 ~]# dd if=/dev/zero of=05.img bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.109192 seconds, 480 MB/s
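The five dd commands above can be condensed into a short loop (same file names and sizes):

```shell
# Create four 100 MB images plus one 50 MB image,
# matching the dd commands above.
for i in 1 2 3 4; do
    dd if=/dev/zero of=0${i}.img bs=1M count=100 2>/dev/null
done
dd if=/dev/zero of=05.img bs=1M count=50 2>/dev/null
ls -l 0?.img
```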
2. Attach the image files to loop devices.
[root@node02 ~]# losetup /dev/loop0 01.img
[root@node02 ~]# losetup /dev/loop1 02.img
[root@node02 ~]# losetup /dev/loop2 03.img
[root@node02 ~]# losetup /dev/loop3 04.img
[root@node02 ~]# losetup /dev/loop4 05.img
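To confirm the attachments, list the active loop mappings; on newer util-linux, losetup can also pick a free device for you instead of hard-coding /dev/loopN:

```shell
# List all active loop mappings to confirm the five images are attached.
losetup -a
# On newer util-linux, -f finds the first free device and --show prints
# the path it chose, e.g.:
#   losetup -f --show 01.img
```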
3. Build a RAID 5 array from three of the 100 MB "disks" plus the 50 MB "disk", simulating the scenario in the fourth screenshot of the QNAP capacity-expansion guide. The array's usable size at this point is 3 × 50 MB = 150 MB.
[root@node02 ~]# mdadm -CR /dev/md0 -l5 -n4 /dev/loop[0-2] /dev/loop4 --assume-clean (add -b internal for bitmap mode)
mdadm: largest drive (/dev/loop0) exceeds size (51136K) by more than 1%
mdadm: array /dev/md0 started.
[root@node02 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 loop4[3] loop2[2] loop1[1] loop0[0]
153408 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[root@node02 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Apr 30 21:15:09 2009
Raid Level : raid5
Array Size : 153408 (149.84 MiB 157.09 MB)
Used Dev Size : 51136 (49.95 MiB 52.36 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Apr 30 21:15:09 2009
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 8194e2d6:843e2352:c335957f:1051c266
Events : 0.1
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
2 7 2 2 active sync /dev/loop2
3 7 4 3 active sync /dev/loop4
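The reported numbers check out: RAID 5 stores data on n−1 members, each limited to the smallest device, and mdadm shows a Used Dev Size of 51136 KiB (the 50 MB image minus superblock overhead):

```shell
# RAID 5 usable capacity = (members - 1) * smallest member size.
# Used Dev Size above is 51136 KiB, so with 4 members:
echo $(( (4 - 1) * 51136 ))   # -> 153408, matching "Array Size : 153408"
```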
4. Simulate swapping in a 100 MB disk for the last member (i.e. remove the 50 MB disk and add a 100 MB one).
[root@node02 ~]# mdadm -f /dev/md0 /dev/loop4
mdadm: set /dev/loop4 faulty in /dev/md0
[root@node02 ~]# mdadm -r /dev/md0 /dev/loop4
mdadm: hot removed /dev/loop4
[root@node02 ~]# mdadm -a /dev/md0 /dev/loop3
mdadm: added /dev/loop3
A resync now takes place, and the array will not accept a grow command until it finishes. The capacity is still 150 MB.
[root@node02 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Apr 30 21:15:09 2009
Raid Level : raid5
Array Size : 153408 (149.84 MiB 157.09 MB)
Used Dev Size : 51136 (49.95 MiB 52.36 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Apr 30 21:16:29 2009
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 54% complete
UUID : 8194e2d6:843e2352:c335957f:1051c266
Events : 0.4
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
2 7 2 2 active sync /dev/loop2
4 7 3 3 spare rebuilding /dev/loop3
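To make the "wait for the resync before growing" step explicit, a sketch (assuming root and the same /dev/md0 as above):

```shell
# /proc/mdstat shows a progress bar while the array is recovering.
cat /proc/mdstat
# Block until recovery finishes -- mdadm -G is rejected while it runs:
mdadm --wait /dev/md0
```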
5. Final step: the grow succeeds and the array is now 300 MB. (One difference here: without bitmap mode there is another full resync; with bitmap mode none is needed at all.) (With a bitmap, if the array stops unexpectedly but its member disks are intact, re-adding them later requires only an incremental resync instead of a full one. The bitmap is not enabled by default; personally I think bitmap mode should always be used.)
[root@node02 ~]# mdadm -G /dev/md0 -z max
[root@node02 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Apr 30 21:15:09 2009
Raid Level : raid5
Array Size : 307008 (299.86 MiB 314.38 MB)
Used Dev Size : 102336 (99.95 MiB 104.79 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Apr 30 21:17:29 2009
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 8194e2d6:843e2352:c335957f:1051c266
Events : 0.8
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
2 7 2 2 active sync /dev/loop2
3 7 3 3 active sync /dev/loop3
Online resizing of the logical volume and filesystem is omitted here.
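For reference, if /dev/md0 were an LVM physical volume with an ext3 logical volume on top, that omitted resize might look like the sketch below (the volume-group and LV names vg0/lv0 are hypothetical, not part of this experiment):

```shell
# Grow the PV to cover the enlarged md device, then the LV, then the FS.
pvresize /dev/md0                    # pick up the new array size
lvextend -l +100%FREE /dev/vg0/lv0   # give the LV all the new space
resize2fs /dev/vg0/lv0               # ext3 supports online growth
```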