The title says it all. I have a 3-partition mdadm RAID 5 on Ubuntu, and I have now added a 4th partition. Each partition spans its entire physical disk, and each disk is 4 TB. The filesystem is ext4. After growing the mdadm RAID 5, do I have to wait for the reshape to finish before calling resize2fs?
The mdadm wiki says that after growing a RAID 5 you should run an fsck check and then resize.
Do I have to wait for the mdadm reshape to finish for resize2fs to work? If not, it doesn't seem to be working. The RAID is unmounted, of course. I ran an fsck -f check and then ran resize2fs on the RAID, but mdadm -D /dev/md0 still shows the 4 disks with an array size of 8 TB and a used dev size of 4 TB. fdisk -l also shows a size of only 8 TB.
What am I doing wrong? How do I resize the filesystem to include my 4th disk?
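For reference, this is roughly the sequence I ran (a sketch; that /dev/sdb1 is the newly added partition is my reading of the output below):

mdadm --add /dev/md0 /dev/sdb1          # add the new partition to the array
mdadm --grow /dev/md0 --raid-devices=4  # grow from 3 to 4 devices; this starts the reshape
umount /dev/md0                         # array unmounted before the check
fsck -f /dev/md0                        # forced filesystem check
resize2fs /dev/md0                      # grow ext4 to fill the array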
fdisk -l output:
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes/4096 bytes
I/O size (minimum/optimal): 4096 bytes/4096 bytes
Disklabel type: gpt
Disk identifier: E434C200-63C3-4CB8-8097-DD369155D797

Device Start End Sectors Size Type
/dev/sdb1 2048 7814035455 7814033408 3.7T Linux RAID

Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes/4096 bytes
I/O size (minimum/optimal): 4096 bytes/4096 bytes
Disklabel type: gpt
Disk identifier: CAD9D9DA-1C3B-4FC2-95BD-A5F0B55DE313

Device Start End Sectors Size Type
/dev/sdd1 2048 7814035455 7814033408 3.7T Linux RAID

Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes/4096 bytes
I/O size (minimum/optimal): 4096 bytes/4096 bytes
Disklabel type: gpt
Disk identifier: 090E94F6-A461-4B27-BCDF-5B49EC6AFC84

Device Start End Sectors Size Type
/dev/sdc1 2048 7814035455 7814033408 3.7T Linux RAID

Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes/4096 bytes
I/O size (minimum/optimal): 4096 bytes/4096 bytes
Disklabel type: gpt
Disk identifier: A0285071-2C16-42B4-8567-32CE98147A93

Device Start End Sectors Size Type
/dev/sde1 2048 7814035455 7814033408 3.7T Linux RAID

Disk /dev/md0: 7.3 TiB, 8001301774336 bytes, 15627542528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes/4096 bytes
I/O size (minimum/optimal): 524288 bytes/1572864 bytes
mdadm -D /dev/md0 output:
/dev/md0:
Version : 1.2
Creation Time : Thu Jan 7 08:23:57 2016
Raid Level : raid5
Array Size : 7813771264 (7451.79 GiB 8001.30 GB)
Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Jan 11 08:10:47 2016
State : clean, reshaping
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Reshape Status : 47% complete
Delta Devices : 1, (3->4)
Name : NAS:0 (local to host NAS)
UUID : 69ba4b0e:a2427b2a:121cc4e0:5461a8fb
Events : 10230

Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
3 8 65 2 active sync /dev/sde1
4 8 17 3 active sync /dev/sdb1
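The Reshape Status line above shows the reshape is still at 47%; I am watching its progress with something like:

cat /proc/mdstat
# or, refreshing every minute:
watch -n 60 cat /proc/mdstat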