Create ZFS Storage in Proxmox VE

by RamWise

ZFS is one of the most capable file systems available on Linux. Proxmox VE is built on Debian Linux and supports ZFS as a storage backend for VMs and containers. This tutorial shows you how to quickly create a ZFS pool from multiple drives and add it to Proxmox VE for VM and container storage.

If you would like to start by installing Proxmox VE first, go to my post here. You can come back afterwards to create the ZFS pool in Proxmox.

1. Install the hard disks and check that they are visible under the Disks menu in the PVE web interface. I have two drives that will be used for the ZFS pool. (Remember that ZFS works best with direct access to the disks, so use an HBA or a non-RAID storage controller.)
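If you prefer the shell, the same check works there. A minimal sketch, assuming the two new drives appear as /dev/sdb and /dev/sdc as they do in this setup:

#list the new drives with their size and any existing filesystems
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb /dev/sdc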

2. To prepare the disks for zpool creation, first wipe any existing filesystem signatures from them. (This destroys any data on the disks, so double-check the device names.)

wipefs -a /dev/sdb /dev/sdc 
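To double-check the wipe, wipefs without options only reports signatures and does not erase anything; empty output means the disks are clean:

#read-only check: list any filesystem signatures that remain
wipefs /dev/sdb /dev/sdc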

3. There are a variety of ZFS pool types: the conventional RAID levels (striping and mirroring) as well as the RAID-Z modes are supported by default.

ZPOOL RAID Options

     mirror   A mirror of two or more devices. Data is replicated in an
              identical fashion across all components of a mirror. A mirror
              with N disks of size X can hold X bytes and can withstand (N-1)
              devices failing before data integrity is compromised.

     raidz    (or raidz1, raidz2, raidz3). A variation on RAID-5 that allows
              for better distribution of parity and eliminates the "RAID-5"
              write hole (in which data and parity become inconsistent after
              a power loss). Data and parity is striped across all disks
              within a raidz group.

              A raidz group can have single, double, or triple parity,
              meaning that the raidz group can sustain one, two, or three
              failures, respectively, without losing any data. The raidz1
              vdev type specifies a single-parity raidz group; the raidz2
              vdev type specifies a double-parity raidz group; and the
              raidz3 vdev type specifies a triple-parity raidz group. The
              raidz vdev type is an alias for raidz1.

              A raidz group with N disks of size X with P parity disks can
              hold approximately (N-P)*X bytes and can withstand P device(s)
              failing before data integrity is compromised. The minimum
              number of devices in a raidz group is one more than the number
              of parity disks. The recommended number is between 3 and 9 to
              help increase performance.
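For comparison, a single-parity RAID-Z pool is created in much the same way as the mirror below. A sketch only; /dev/sdd is a hypothetical third disk that this setup does not actually have:

#create a raidz1 pool named tank from three disks (/dev/sdd is hypothetical)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd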

    We are going to create a mirrored zpool named tank from the two drives, because with two disks a mirror gives full redundancy and, to my knowledge, the best performance of the available layouts.

zpool create tank mirror /dev/sdb /dev/sdc 
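You can confirm the new pool came up healthy straight from the shell:

#show pool layout and health; both disks should be ONLINE
zpool status tank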

4. As you can see above, we created the zpool with kernel device names (/dev/sdb, /dev/sdc). However, device names are bound to change after reboots or hardware changes, which would leave the zpool degraded. Hence we will make the zpool use persistent block device identifiers (/dev/disk/by-id) instead of device names.

  Exporting the ZFS pool and immediately importing it back with /dev/disk/by-id will pin the disk references to their persistent identifiers.

#export the pool
zpool export tank

#import back with by-id
zpool import -d /dev/disk/by-id tank 

To verify this, go to Disks -> ZFS in the web UI, select the zpool ‘tank’ and click Detail. It should now show /dev/disk/by-id in the disk assignments.
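The same check works from the shell; after the import above, the pool members should be listed by their long by-id identifiers rather than sdb/sdc:

#disk names in the output should now be /dev/disk/by-id identifiers
zpool status tank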

5. Now that the ZFS pool is successfully created, we can go ahead and add it to PVE as storage with the command below.

pvesm add zfspool tank -pool tank 
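To confirm from the shell, pvesm can list all configured storages and their status:

#the new storage 'tank' should be listed with type zfspool and active
pvesm status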

   To verify in the web UI, the new storage tank should now appear in the left-hand tree, and under Disks -> ZFS the pool shows as ONLINE and healthy.

You can now use this new ZFS pool for VM disk images and containers in Proxmox VE.
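By default a zfspool storage serves VM disk images and container root disks. If you want to state the content types explicitly, pvesm can set them on the storage we just added; a minimal sketch:

#limit the storage to VM disk images and container root disks
pvesm set tank --content images,rootdir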

2 comments

Bruce March 24, 2022 - 7:00 pm

The second time I created ZFS storage it didn’t show up as a pool, so the command ‘pvesm add zfspool tank -pool tank’ did it for me. Thanks

Jordan August 5, 2023 - 11:49 am

Thanks for this, found it in a Google search, and as Bruce said, that command is what saved the day for me too.


