Expanding the Synology system partition md0
0x01 Preface
One of my DSM boxes running on non-Synology hardware (a "黑群晖") has been upgraded all the way from DSM 6 to DSM 7.2. When DSM 7.3 came out recently I wanted to upgrade right away, but the system update kept failing with an "insufficient system space" error.
After logging in over SSH, df -h showed that the root filesystem /dev/md0 (Synology's system partition) was essentially full. Growing the root would normally mean touching the partition table, and in Synology's disk layout md0 (the system partition) is immediately followed by md1 (the swap partition), so the physical sectors are contiguous. Forcibly moving partition boundaries there is very risky, and expansion looked close to impossible in theory.
Then I stumbled across an article online — "群晖md0分区无损扩容教程,用于解决从DSM6升级到7后md0分区大小仍然是2.4GB" (a lossless md0 expansion tutorial for the case where md0 is still 2.4 GB after upgrading from DSM 6 to 7) — and discovered a pleasant surprise: the system partition on each physical disk (p1) has already been allocated 8 GB; only the RAID array on top of it (md0) is still stuck at the DSM 6-era 2.4 GB. In other words, there is no need to risk modifying the physical partition tables at all — we just have to let the size-capped RAID array and its filesystem grow into the idle 8 GB.
0x02 Expansion
From /proc/mdstat you can see that md0 — the Synology system array — is only 2,489,216 blocks (roughly 2.4 GB), and that its members are sata2p1[12] sata6p1[16] sata5p1[13] sata4p1[15] sata3p1[14]:
root@SA6400:/# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sata2p3[4] sata5p3[5] sata4p3[2] sata3p3[1]
2898114624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid1 sata6p3[0]
5849798208 blocks super 1.2 [1/1] [U]
md1 : active raid1 sata2p2[12] sata6p2[16] sata4p2[15] sata5p2[13] sata3p2[14]
2096128 blocks super 1.2 [12/5] [UUUUU_______]
md0 : active raid1 sata2p1[12] sata6p1[16] sata5p1[13] sata4p1[15] sata3p1[14]
2489216 blocks super 1.2 [12/5] [UUUUU_______]
unused devices: <none>
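In /proc/mdstat the array sizes are reported in 1 KiB blocks, so md0's 2,489,216 blocks really are only about 2.4 GB. A quick sanity check of that conversion — plain shell arithmetic on the number from the output above, no mdadm access needed:

```shell
# /proc/mdstat reports array sizes in 1 KiB blocks.
md0_blocks=2489216                  # md0 size taken from the mdstat output above
md0_mib=$(( md0_blocks / 1024 ))    # 1 KiB blocks -> MiB
echo "md0 is ${md0_mib} MiB"        # ~2430 MiB, i.e. roughly 2.4 GB
```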
Listing all the physical devices confirms that, although each p1 partition really is 8 GB (16,777,216 sectors × 512 bytes), the RAID array is not using the partition's full capacity.
# sata2p1, sata3p1, sata4p1, sata5p1 and sata6p1 are all in fact 8G
root@SA6400:/# fdisk -l /dev/sata?
Disk /dev/sata2: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP102
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x41f2b7ea
Device Boot Start End Sectors Size Id Type
/dev/sata2p1 8192 16785407 16777216 8G fd Linux raid autodetect
/dev/sata2p2 16785408 20979711 4194304 2G fd Linux raid autodetect
/dev/sata2p3 21241856 1953320351 1932078496 921.3G fd Linux raid autodetect
Disk /dev/sata3: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP102
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xedcbbbac
Device Boot Start End Sectors Size Id Type
/dev/sata3p1 8192 16785407 16777216 8G fd Linux raid autodetect
/dev/sata3p2 16785408 20979711 4194304 2G fd Linux raid autodetect
/dev/sata3p3 21241856 1953320351 1932078496 921.3G fd Linux raid autodetect
Disk /dev/sata4: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP102
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xbdcf33c5
Device Boot Start End Sectors Size Id Type
/dev/sata4p1 8192 16785407 16777216 8G fd Linux raid autodetect
/dev/sata4p2 16785408 20979711 4194304 2G fd Linux raid autodetect
/dev/sata4p3 21241856 1953320351 1932078496 921.3G fd Linux raid autodetect
Disk /dev/sata5: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP102
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xbcd081e2
Device Boot Start End Sectors Size Id Type
/dev/sata5p1 8192 16785407 16777216 8G fd Linux raid autodetect
/dev/sata5p2 16785408 20979711 4194304 2G fd Linux raid autodetect
/dev/sata5p3 21241856 1953320351 1932078496 921.3G fd Linux raid autodetect
Disk /dev/sata6: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: HUS726T6TALE6L4
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C9209603-1534-426C-9CFA-9BEFFE39C89F
Device Start End Sectors Size Type
/dev/sata6p1 8192 16785407 16777216 8G Linux RAID
/dev/sata6p2 16785408 20979711 4194304 2G Linux RAID
/dev/sata6p3 21241856 11720840351 11699598496 5.5T Linux RAID
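Each p1 partition spans 16,777,216 sectors of 512 bytes — exactly 8 GiB — while md0 occupies only 2,489,216 KiB of it. The unused headroom can be checked with shell arithmetic on the numbers copied from the two outputs above:

```shell
# Partition size: sector count (512 B sectors) from fdisk -l.
p1_sectors=16777216
p1_kib=$(( p1_sectors * 512 / 1024 ))   # -> 8388608 KiB = 8 GiB
# Array size: 1 KiB blocks from /proc/mdstat.
md0_kib=2489216
slack_kib=$(( p1_kib - md0_kib ))       # headroom sitting idle inside p1
echo "partition: ${p1_kib} KiB, md0: ${md0_kib} KiB, slack: ${slack_kib} KiB"
```

That is roughly 5.6 GiB of already-partitioned space that md0 can safely grow into without touching the partition table.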
So let's go ahead and expand.
# Grow md0 to the maximum available size
root@SA6400:/# mdadm --grow /dev/md0 --size=max
mdadm: component size of /dev/md0 has been set to 8387584K
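Note that mdadm reports the new component size as 8,387,584 KiB — 1,024 KiB short of the 8,388,608 KiB partition. That difference is presumably the space reserved for the md v1.2 superblock/data offset. A quick check of the gap:

```shell
p1_kib=8388608                             # partition size (16777216 sectors * 512 B)
md0_new_kib=8387584                        # component size reported by mdadm --grow
overhead_kib=$(( p1_kib - md0_new_kib ))   # space kept back for md metadata
echo "overhead: ${overhead_kib} KiB"
```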
# Grow the filesystem
root@SA6400:/# resize2fs /dev/md0
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/md0 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/md0 is now 2096896 (4k) blocks long.
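resize2fs counts in 4 KiB filesystem blocks, whereas mdadm reported the component size in KiB — the two figures describe the same capacity. Converting one into the other (shell arithmetic on the numbers above):

```shell
fs_4k_blocks=2096896                # filesystem size from the resize2fs output
fs_kib=$(( fs_4k_blocks * 4 ))      # 4 KiB blocks -> KiB
echo "filesystem: ${fs_kib} KiB"    # 8387584 KiB, matching mdadm --grow exactly
```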
# The change has taken effect: md0 is now 8,387,584 blocks (~8 GB); check again once the resync finishes
root@SA6400:/# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sata2p3[4] sata5p3[5] sata4p3[2] sata3p3[1]
2898114624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid1 sata6p3[0]
5849798208 blocks super 1.2 [1/1] [U]
md1 : active raid1 sata2p2[12] sata6p2[16] sata4p2[15] sata5p2[13] sata3p2[14]
2096128 blocks super 1.2 [12/5] [UUUUU_______]
md0 : active raid1 sata2p1[12] sata6p1[16] sata5p1[13] sata4p1[15] sata3p1[14]
8387584 blocks super 1.2 [12/5] [UUUUU_______]
[==========>..........] resync = 50.4% (4235392/8387584) finish=0.8min speed=79371K/sec
unused devices: <none>
# Done
root@SA6400:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.9G 1.6G 5.9G 22% /