Recently I decided to expand the storage capacity of my home fileserver’s ZFS storage pool, which was configured to use a single RAIDZ vdev comprising three 750 GB drives.
I wanted to expand the RAIDZ vdev by adding one extra 750 GB drive.
As you may or may not be aware, it is currently not possible to expand a RAIDZ vdev in ZFS, which is a great pity.
However, it is still possible to achieve this expansion goal; you just have to perform a few tricks. Here is how I did it.
The way to achieve RAIDZ vdev expansion is to destroy the pool and re-create it, using the additional drive(s).
Of course, this means you need to back up all the data first, so you’ll need an additional storage location available with enough capacity to hold it.
I decided to do two full backups of all my data.
For the first backup, I attached one additional SATA drive and typed ‘format’ to see the id of the new drive. Once I had the id, I created a second ZFS storage pool that used only this new drive:
# zpool create backuppool c3t0d0
Once the backup pool had been created, it was simply a matter of copying the file systems from the main storage pool to the new backup pool. Of course, if the backup pool required more capacity than a single drive provides, then two drives can be used.
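The copy itself isn’t shown above, but one way to do it, assuming a recursive snapshot named backup1 (a name I’ve made up here for illustration), is with zfs send/receive:

# zfs snapshot -r tank@backup1
# zfs send -R tank@backup1 | zfs receive -F -d backuppool

The -R flag sends the snapshot together with all descendant file systems and their properties, and -d on the receiving side recreates them under backuppool. And if a second drive really were needed, a striped (non-redundant) two-drive backup pool could be created with something like ‘zpool create backuppool c3t0d0 c3t1d0’, where the second drive id is hypothetical.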
Once the transfer of all the file systems had completed, I made a second full backup by copying the file systems to an iSCSI-connected storage pool on a separate backup server. For setup info see: Home Fileserver: Backups.
After both full backups had been made, I verified that the backups had been successful. All seemed well.
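There’s more than one way to verify a backup, but a simple approach is to scrub the backup pool, which makes ZFS read every block and check it against its checksum, and then compare the space used with the original pool:

# zpool scrub backuppool
# zpool status backuppool
# zfs list -r backuppool

zpool status reports any errors the scrub found, and zfs list shows the used space per file system for comparison against the main pool.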
The next step was to destroy the main storage pool comprising the three-drive RAIDZ1 vdev (single-parity RAIDZ, as opposed to RAIDZ2, which uses double parity). This moment is not a pleasant one, but with the backups done, nothing could go wrong, could it?
So the storage pool ‘terminator’ command I executed was:
# zpool destroy tank
The next step was to re-create the ZFS storage pool using the additional drive, which I had already plugged in. First, I needed the ids of all the drives that would make up the new pool, so I used the good old format command:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0
          /pci@0,0/pci-ide@4/ide@0/cmdk@0,0
       1. c1t0d0
          /pci@0,0/pci1043,8239@5/disk@0,0
       2. c1t1d0
          /pci@0,0/pci1043,8239@5/disk@1,0
       3. c2t0d0
          /pci@0,0/pci1043,8239@5,1/disk@0,0
       4. c2t1d0
          /pci@0,0/pci1043,8239@5,1/disk@1,0
Specify disk (enter its number): ^C
#
Then, using the drive ids from above, I created the new storage pool comprising a single RAIDZ1 vdev using the four drives:
# zpool create tank raidz1 c1t0d0 c1t1d0 c2t0d0 c2t1d0
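Before putting any data on the new pool, it’s worth a quick sanity check that the layout is what was intended, i.e. a single raidz1 vdev containing all four drives:

# zpool status tank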
After creating the new storage pool, I took the opportunity of reviewing the file system layout and making a few changes by creating a new file system hierarchy that better suited my needs.
Once I had created the file systems using the new layout, the next step was to copy the file system data back from the backup pool to the new main storage pool.
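The file system names below are placeholders rather than my actual layout, but the restore is simply the reverse of the backup: create the new hierarchy, then receive each file system into its new home:

# zfs create tank/media
# zfs send backuppool/photos@backup1 | zfs receive tank/media/photos

Here backup1 is the snapshot name assumed earlier, and the send/receive step is repeated for each file system to be restored.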
After I had successfully copied all the data back to the new, larger storage pool, I decided to unplug the backup pool drive from the case and keep it on the shelf in case it was needed again in future. Before removing the drive, I exported the backup pool to flush any pending writes:
# zpool export backuppool
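Should the backup drive ever be needed again, it’s just a matter of plugging it back in and importing the pool:

# zpool import backuppool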
So, the end result of this work is that I now have a storage pool using a single RAIDZ1 vdev, but utilising four drives instead of three. With 750 GB drives, this means that instead of ~1.4 TB of data capacity plus ~0.7 TB of parity data, the new four-drive setup gives ~2.1 TB for data and ~0.7 TB for parity data. Nice! Grrrrreat.