ZFS Compression and Deduplication Demo

1/ Deduplication in ZFS

Suppose that on my ZFS server I have the following disks:

I am going to use /dev/sdk and /dev/sdm for a mirror pool called dalpool.

zpool create -f dalpool mirror /dev/sdk /dev/sdm

zfs list

zpool status dalpool

Right now the pool is mounted at /dalpool. Let's create a file filled with random data called randfile.

cd /dalpool

dd if=/dev/urandom of=randfile bs=1M count=100

zpool list dalpool

Now we have the file called randfile.

After having the file, we can now play with the deduplication and compression features. First, let’s demonstrate deduplication.

Let's copy randfile to randfile2.

ls -la

cp randfile randfile2

zpool list dalpool

Note that the ALLOC jumped from 37.4M to 201M.

Now let's create a new dataset called dalpool/deduplicated with deduplication enabled, then move those two duplicate files into /dalpool/deduplicated and see what happens.

zfs create -o dedup=on dalpool/deduplicated

mv rand* deduplicated/

Let's look at the ALLOC now. It drops from 201M to 102M, a dramatic change!

Let's once again duplicate randfile, this time to a new file called randfile3.
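The copy step described above would look something like this (file names follow the demo; the paths assume the pool layout created earlier):

```shell
# Copy within the dedup-enabled dataset; the new blocks are
# deduplicated against the existing ones at write time.
cd /dalpool/deduplicated
cp randfile randfile3
sync   # flush writes so the pool statistics are up to date
```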

Now let’s check the deduplication status.
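The DEDUP column of `zpool list` reports the pool-wide deduplication ratio; `zpool status -D` additionally shows deduplication table statistics:

```shell
# The DEDUP column shows the pool-wide deduplication ratio.
zpool list dalpool

# -D prints dedup table (DDT) statistics for the pool.
zpool status -D dalpool
```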

As you can see, the ALLOC is still 102M and DEDUP is now 3.00x, which is clearly very efficient.

2/ Compression in ZFS

Let's create a new dataset called dalpool/compressed and turn on LZ4 compression for it.
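Assuming the same pool, the dataset can be created with compression enabled in one step:

```shell
# Create the dataset with LZ4 compression enabled from the start,
# so everything subsequently written to it is compressed.
zfs create -o compression=lz4 dalpool/compressed

# Verify that the property took effect.
zfs get compression dalpool/compressed
```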

Under /dalpool I have created a directory called conf and I copied all /etc/*.conf files to this /dalpool/conf directory. At this point, these files are not being compressed.
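The staging step described above might look like this (paths as in the demo):

```shell
# Stage uncompressed copies of the system .conf files
# in the plain (non-compressed) part of the pool.
mkdir /dalpool/conf
cp /etc/*.conf /dalpool/conf/
```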

The ALLOC at this point is 910M.

Let’s move /dalpool/conf/ into /dalpool/compressed/
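A sketch of the move (moving across datasets rewrites the files, so LZ4 compression is applied as the data lands in the new dataset):

```shell
# Move the directory into the compressed dataset; the data is
# rewritten and compressed on the way in.
mv /dalpool/conf /dalpool/compressed/
```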

Let’s check the ALLOC again.

As you can see, instead of 910M it drops to 102M. This means that compression works very well in ZFS, and it is beneficial for us to use it. Both deduplication and compression give us great control over storage space usage.
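Rather than inferring the savings from ALLOC, ZFS also exposes the achieved ratio directly as a per-dataset property:

```shell
# compressratio reports the compression ratio achieved
# for data already written to the dataset.
zfs get compressratio,compression dalpool/compressed
```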