I bought a Synology DS423+ last year, and at the same time manually converted a Fanxiang S690 M.2 NVMe SSD into a storage drive, where I keep the Synology packages, Docker, the homes folder, and other frequently used software and data.
The system has stayed on DSM 7.1. It is mostly fine, except that Docker cannot search the registry, so I have been pulling images manually over SSH, which is inconvenient. I plan to upgrade to 7.2.
The Fanxiang P761 comes with its own heatsink
During the 618 sale I picked up a Fanxiang P761, also 2TB, to pair with the S690 in a RAID1.
The plan:
- Add the P761 and mirror the two M.2 drives as RAID1;
- Pull one of the drives, then upgrade DSM 7.1 to 7.2;
- If the upgrade no longer recognizes the setup, put the drive back and stop fiddling;
- If the upgrade goes smoothly, consider splitting the RAID1 later.
After installing the M.2 drive and rebooting, the system automatically reported an available drive, but it would not let me create a storage pool on it, nor change the RAID type.
Having been through this before, I expected that: manual intervention is required. There is a script floating around online that automates the whole thing, but I prefer the command-line approach. The steps are as follows:
1. Log in via SSH and switch to root.
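A minimal sketch of this step (the user name and address are placeholders; on DSM 7 you log in with an administrator account and elevate with sudo, typing the same password again):
$ ssh admin@192.168.1.10
$ sudo -i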
2. List the NVMe devices:
# ls /dev/nvme*
/dev/nvme0 /dev/nvme0n1 /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3 /dev/nvme1 /dev/nvme1n1
Here /dev/nvme0 is last year's S690, which already has partitions, while /dev/nvme1 is the newly added P761.
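If you are unsure which device sits in which slot, the controller's model string in sysfs tells them apart without touching the disks (this assumes only the standard Linux sysfs layout, which DSM's kernel should expose):
# cat /sys/block/nvme0n1/device/model
# cat /sys/block/nvme1n1/device/model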
3. Check the disk information:
# fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Fanxiang P761 2TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The byte and sector counts are exactly the same as the S690's. Excellent: nothing gets in the way of building the RAID1.
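If you would rather compare numbers than eyeball fdisk output, blockdev (a standard util-linux tool that should be present on DSM) prints the 512-byte sector count directly; both drives report the same 3907029168 here:
# blockdev --getsz /dev/nvme0n1
3907029168
# blockdev --getsz /dev/nvme1n1
3907029168
A slightly larger second drive would also have worked, since mdadm sizes a mirror to its smaller member, but identical geometry keeps things tidy.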
4. Create the Synology partition layout on the second SSD:
# synopartition --part /dev/nvme1n1 12
Device Sectors (Version7: SupportRaid)
/dev/nvme1n11 4980480 (2431 MB)
/dev/nvme1n12 4194304 (2048 MB)
Reserved size: 262144 ( 128 MB)
Primary data partition will be created.
WARNING: This action will erase all data on '/dev/nvme1n1' and repart it, are you sure to continue? [y/N] y
Cleaning all partitions...
Creating sys partitions...
Creating primary data partition...
Please remember to mdadm and mkfs new partitions.
5. Inspect the second SSD's partition layout:
# fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Fanxiang P761 2TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfb8b4581
Device Boot Start End Sectors Size Id Type
/dev/nvme1n1p1 256 4980735 4980480 2.4G fd Linux raid autodetect
/dev/nvme1n1p2 4980736 9175039 4194304 2G fd Linux raid autodetect
/dev/nvme1n1p3 9437184 3907024064 3897586881 1.8T fd Linux raid autodetect
6. Check the current storage pools:
# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sata1p3[0]
15615155200 blocks super 1.2 [1/1] [U]
md2 : active raid1 sata4p3[1]
3896294208 blocks super 1.2 [1/1] [U]
md7 : active raid1 sata2p3[0]
15615155200 blocks super 1.2 [1/1] [U]
md6 : active raid1 sata3p3[0]
989480256 blocks super 1.2 [1/1] [U]
md3 : active raid1 nvme0n1p3[0]
1948792384 blocks super 1.2 [1/1] [U]
md1 : active raid1 sata1p2[0] sata3p2[3] sata2p2[2] sata4p2[1]
2097088 blocks [4/4] [UUUU]
md0 : active raid1 sata1p1[0] sata3p1[3] sata2p1[2] sata4p1[1]
8388544 blocks [4/4] [UUUU]
unused devices: &lt;none&gt;
7. Add the device to an array.
If you are making a Basic (single-drive) pool instead, note the number of the last existing array (mine is md7; md0 is the system partition and md1 is the system swap) and run:
# mdadm --create /dev/md8 --level=1 --raid-devices=1 --force /dev/nvme1n1p3
to create the NVMe storage pool.
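One caveat on that Basic path: as synopartition's reminder above says, a freshly created md device has no filesystem yet, so you would still have to mkfs it before DSM can use it. A sketch, assuming an ext4 volume (DSM volumes are ext4 or Btrfs; match whatever your other volumes use):
# mkfs.ext4 -F /dev/md8
Storage Manager should then pick the new pool up, possibly after a reboot.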
But what I want this time is to add the partition to the existing array, so I run:
# mdadm --add /dev/md3 /dev/nvme1n1p3
mdadm: added /dev/nvme1n1p3
# cat /proc/mdstat
The (S) spare flag is visible:
md3 : active raid1 nvme1n1p3[1](S) nvme0n1p3[0]
1948792384 blocks super 1.2 [1/1] [U]
In the DSM UI, the drive's status has changed and it has joined Storage Pool 2:
The new drive's status changed from "Not initialized" to "Healthy"
8. Turn it into a real RAID1.
Promote the new drive from spare to full RAID1 member:
# mdadm --grow /dev/md3 --raid-disks=2
raid_disks for /dev/md3 set to 2
The rebuild progress is now visible:
# cat /proc/mdstat
md3 : active raid1 nvme1n1p3[1] nvme0n1p3[0]
1948792384 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.0% (379328/1948792384) finish=342.4min speed=94832K/sec
The sync progress also shows up in Storage Manager, under Storage Pool 2:
Drive check in progress
# cat /proc/mdstat
md3 : active raid1 nvme1n1p3[1] nvme0n1p3[0]
1948792384 blocks super 1.2 [2/1] [U_]
[=>...................] recovery = 8.7% (171203968/1948792384) finish=110.4min speed=268125K/sec
The Spare flag is gone. Note the sync speed of about 268MB/s, far below what these M.2 drives can do; my guess is it's limited by the DS423+'s bus.
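To avoid rerunning the command by hand while waiting, you can poll it; a plain shell loop works anywhere, and watch (if your DSM build includes it) gives a live refresh:
# while true; do cat /proc/mdstat; sleep 30; done
# watch -n 30 cat /proc/mdstat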
Check the RAID details:
# mdadm -D /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Jun 18 14:04:20 2023
Raid Level : raid1
Array Size : 1948792384 (1858.51 GiB 1995.56 GB)
Used Dev Size : 1948792384 (1858.51 GiB 1995.56 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jun 22 12:07:07 2024
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 6% complete
Name : ****NAS:3 (local to host ****NAS)
UUID : af668548:580003df:0ab9e603:f964f99e
Events : 367
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n1p3
1 259 7 1 spare rebuilding /dev/nvme1n1p3
After the data sync completes, check again:
# cat /proc/mdstat
md3 : active raid1 nvme1n1p3[1] nvme0n1p3[0]
1948792384 blocks super 1.2 [2/2] [UU]
# mdadm -D /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Jun 18 14:04:20 2023
Raid Level : raid1
Array Size : 1948792384 (1858.51 GiB 1995.56 GB)
Used Dev Size : 1948792384 (1858.51 GiB 1995.56 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jun 22 15:07:22 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : *NAS:3 (local to host *NAS)
UUID : af668548:580003df:0ab9e603:f964f99e
Events : 5008
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n1p3
1 259 7 1 active sync /dev/nvme1n1p3
The DSM UI confirms it; with that, the RAID1 is complete.
Storage Pool 2 is now a RAID1 made of the two M.2 drives
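If you want an extra consistency pass over the finished mirror, the md layer's built-in check can be triggered through sysfs (a standard kernel interface, not a Synology-specific one):
# echo check > /sys/block/md3/md/sync_action
# cat /proc/mdstat
The check progress appears in mdstat, and the action returns to idle when done.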
Summary:
Using the command line, I converted the M.2 SSDs to storage and built a RAID1 array. I pasted the commands and output as text rather than screenshots so fellow readers can copy them, but remember in particular to adapt the parameters (md3 vs md5, device names, and so on) to your own machine.
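Two read-only commands make it easy to double-check your own numbering before copying anything:
# grep ^md /proc/mdstat
# mdadm --detail --scan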
The next post covers the DSM upgrade.