I got an MC-32 from OVH last year on Black Friday. I recently reinstalled it with Proxmox VE 5.0, using the preinstallation template with ZFS provided by OVH.
UNCONFIGURED RAID
The MC-32 has 2x 240GB SSDs with soft RAID, and I wanted to build a RAID 1 across the 2 SSDs. Since Proxmox VE 5.0 doesn't support soft RAID, I assumed that the installation template would build a ZFS RAID 1 using both SSDs, but unfortunately it didn't.
After the installation had finished, I found that only one SSD was used.
Here is the partition table after the installation.
$ fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF368BD5-2B0B-404B-AE3C-28B3F2D64E9E
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 468845710 468843663 223.6G Solaris /usr & Apple ZFS
/dev/sda9 468845710 468862094 16385 8M Solaris reserved 1
Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
/dev/sdb was not used. More importantly, the rootfs had been installed to a ZFS pool named rpool, and rpool was using the whole partition /dev/sda2.
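A quick way to confirm which devices actually back the pool is to check its status, for example:
zpool status rpool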
REBUILDING RAID IN ZFS
I had a running disk with data on it and an empty disk. What should I do to get RAID 1 on these 2 disks? Fortunately I was using ZFS, so it's rather simple. Let's rebuild the RAID 1 using a ZFS mirror.
I wanted to create the same partition table on /dev/sdb, mirror all data from /dev/sda2 to /dev/sdb2, and have rpool backed by 2 mirrored partitions on the 2 disks.
(Optional) Wipe the disk to be used as the mirrored disk. This is an irreversible operation; it will permanently delete all existing data on the disk.
shred -vz /dev/sdb
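Before running the wipe, it may be worth double-checking that /dev/sdb really is the empty disk, for example by listing it:
lsblk /dev/sdb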
Copy the partition table from /dev/sda to /dev/sdb.
sgdisk --replicate=/dev/sdb /dev/sda
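To double-check that the partition table was copied correctly, the new table on /dev/sdb can be printed, for example:
sgdisk --print /dev/sdb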
Randomize the disk's GUID and all partitions' unique GUIDs of /dev/sdb to prevent conflicts.
sgdisk --randomize-guids /dev/sdb
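Optionally, the partition GUIDs on both disks can be compared afterwards to confirm they now differ, for example:
lsblk --output NAME,PARTUUID /dev/sda /dev/sdb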
Install the boot loader to /dev/sdb.
grub-install /dev/sdb
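Depending on the setup, it may also be worth regenerating the GRUB configuration afterwards, for example:
update-grub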
Mirror the current data from /dev/sda2 to /dev/sdb2. By attaching /dev/sdb2 to rpool as a mirror device, ZFS will automatically sync the data.
zpool attach -f rpool /dev/sda2 /dev/sdb2
Check the mirroring progress with the following command.
zpool status -v
In the end, /dev/sda2 and /dev/sdb2 formed a mirror for rpool.
$ zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0 in 0h35m with 0 errors on Sun Oct 8 00:59:39 2017
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda2 ONLINE 0 0 0
sdb2 ONLINE 0 0 0
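Since the scan line above shows a completed scrub, it's probably worth running one periodically to catch silent corruption early; a manual run looks like this:
zpool scrub rpool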
ZFS AND PROXMOX VE
ZFS plays well with Proxmox VE, and its mirrored zpool covers the drawback of Proxmox VE not supporting soft RAID. Sometimes you just want to pick up a budget server with limited options, such as no hardware RAID, and play with Proxmox VE while keeping your valuable data safe across multiple disks; in that case, ZFS seems to be a good option. But if you have multiple hosts, maybe Proxmox VE on Ceph is your friend.