
Why does my storage utilization reported by VMware for a datastore not agree with Delphix's reported capacity?

Problem

The nature of the Delphix Filesystem (DxFS) is such that, over time, the storage consumption reported by the Delphix Engine will not match that reported by VMware when thin provisioning is used. This article explores an example of this.

Details

By default, VMware will thin provision VMDKs from NFS storage (v5.0 and later allow this to be overridden). Some VMware administrators also simply use thin provisioning as a standard practice in their environments; however, as a best practice, Delphix strongly recommends that VMDKs be provisioned “Eager Zeroed Thick”.

As dSources and virtual databases (VDBs) change over time, DxFS, due to its copy-on-write design, will free blocks and write new blocks elsewhere on the device. However, because DxFS does not explicitly free these now-unused blocks at the storage layer (via the SCSI UNMAP operation), and instead only tracks them internally, the hypervisor is unaware that they have been freed. As an example, we could end up in a situation where Delphix is only using 30% of its resources but VMware reports 90% consumption.

To illustrate this concept, consider the simple ASCII rendering below. On the left is a ZFS perspective, where no more than 4 blocks are allocated (A) at any given time; the remaining blocks are free (F). 


ZFS      THIN PROVISIONED STORAGE
FFFFFFFFFF  FFFFFFFFFF
AAFFFFFFFF  AAFFFFFFFF
AAAAFFFFFF  AAAAFFFFFF
FFAAAAFFFF  AAAAAAFFFF
FFAAFFAAFF  AAAAAAAAFF
FFFFAFAAAF  AAAAAAAAAF
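The allocation states in the diagram can be replayed in a short Python sketch (the block indices are taken from the rows above; the block counts are purely illustrative). The filesystem's current allocation shrinks and grows, while the thin-provisioned device only accumulates blocks that have ever been written:

```python
# Replay of the allocation states from the ASCII diagram above.
# Each entry is the set of blocks ZFS considers allocated at that step.
states = [
    set(),                 # FFFFFFFFFF
    {0, 1},                # AAFFFFFFFF
    {0, 1, 2, 3},          # AAAAFFFFFF
    {2, 3, 4, 5},          # FFAAAAFFFF
    {2, 3, 6, 7},          # FFAAFFAAFF
    {4, 6, 7, 8},          # FFFFAFAAAF
]

ever_written = set()       # blocks the thin-provisioned device has backed
for allocated in states:
    # The storage layer never reclaims a block once written (no UNMAP),
    # so its view only ever grows.
    ever_written |= allocated

print(f"ZFS in use:       {len(states[-1])}/10 blocks")   # 4/10 = 40%
print(f"Storage consumed: {len(ever_written)}/10 blocks") # 9/10 = 90%
```

The final step shows exactly the mismatch described in this article: ZFS has only 4 of 10 blocks allocated, but the thin-provisioned device has backed 9 of 10.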


On the right of the diagram is the thin-provisioned storage perspective, as observed from the storage array and from VMware. Because ZFS doesn’t explicitly free the blocks from the storage perspective, the storage considers itself 90% “utilized”, even though ZFS is only using 40%. Below is an example of what this actually looks like in the Delphix Engine vs VMware.

This snapshot is of the current storage utilization of a Delphix Engine in a lab environment.


Exploring the specific disk utilization using our Delphix service account, we see each storage device assigned to Delphix for operations (domain0) currently using ~510-520MB (in this context, “ALLOC” indicates blocks allocated, or used, by the filesystem):

NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
domain0     23.8G  1.52G  22.3G         -    17%     6%  1.00x  ONLINE  -
  c2t2d0    7.94G   524M  7.43G         -    17%     6%
  c2t3d0    7.94G   512M  7.44G         -    17%     6%
  c2t1d0    7.94G   519M  7.43G         -    17%     6%
rpool       23.9G  9.18G  14.7G         -    32%    38%  1.00x  ONLINE  -
  c2t0d0s0  23.9G  9.18G  14.7G         -    32%    38%
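The CAP column in the output above is simply ALLOC expressed as a percentage of SIZE. A minimal Python sketch (using the figures from the sample output, and rounding to whole percentages as the output does) reproduces it:

```python
# Derive the CAP percentage shown in the pool listing above
# from the SIZE and ALLOC columns.
def cap_percent(alloc_gib: float, size_gib: float) -> int:
    """Allocated space as a whole-number percentage of pool size."""
    return round(alloc_gib / size_gib * 100)

# domain0: 1.52G allocated of 23.8G -> 6%, matching the CAP column
print(cap_percent(1.52, 23.8))
# rpool: 9.18G allocated of 23.9G -> 38%
print(cap_percent(9.18, 23.9))
```

This is the figure the Delphix Engine's reported capacity is based on, and, as discussed below, it is the trustworthy measure of actual utilization.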


However, when exploring the datastore in vSphere, the reported usage is drastically different. In this example, the current VMDK size is ~2.5GB.


This discrepancy occurs because blocks in the thin-provisioned VMDKs are allocated and freed internally by DxFS, and those frees are not visible to VMware. Eventually, the “Size” will grow to a large percentage of the “Provisioned Size”. However, the Capacity reported by the Delphix Engine should always be the value trusted to understand how much storage is actually being utilized, and decisions about expanding storage should be based on that figure, as it is what DxFS takes into account during Delphix-specific processes.
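To put a rough number on the gap in this lab example (the ~2.5GB vSphere figure and the ~520MB per-device ZFS figure are taken from the text above; this is an illustration of the discrepancy, not a formula):

```python
# Hypothetical per-device comparison using the lab figures above:
# vSphere reports the thin VMDK at ~2.5 GiB consumed, while ZFS
# reports only ~520 MiB actually allocated on the same device.
vsphere_vmdk_gib = 2.5
zfs_alloc_gib = 520 / 1024        # ~520 MiB expressed in GiB

# Fraction of the thinly consumed space ZFS still considers in use.
in_use = zfs_alloc_gib / vsphere_vmdk_gib
print(f"ZFS is using ~{in_use:.0%} of what VMware reports as consumed")
```

In other words, roughly four-fifths of what VMware counts against the datastore is space DxFS has already freed internally, which is why the Delphix-reported capacity is the figure to plan against.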
