I am currently running vCOps 5.8.2 and primarily use it for capacity management. A lot of time was spent tuning the configuration policies, including an engagement with VMware. In any event, as I have been going through sites trying to clean up storage capacity issues, whether by allocating more space, planning to order more disk, or converting thin-provisioned disks to thick, I have noticed that the over-commitment numbers produced by vCOps seem to be inaccurate. Below is an example. Perhaps I am misinterpreting what this metric means; if not, it seems way off.
For reference, the policy is set up for usable capacity, with a 20% buffer on disk space and 0% over-commitment on disk space.
- 111 VMs
- 458% over-commitment of disk space
- 527 GB average disk space allocation per VM
- 43 TB of usable disk space (54 TB physical with the 20% buffer; quick check below)
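The usable-capacity figure itself checks out. A quick sketch, assuming the 20% buffer is simply deducted from physical capacity (which is how I read the policy):

```python
# Sanity check of the usable-capacity number.
# Assumption: the 20% disk space buffer comes straight off physical capacity.
physical_tb = 54.0
buffer_pct = 0.20

usable_tb = physical_tb * (1 - buffer_pct)
print(f"Usable: {usable_tb:.1f} TB")  # 43.2 TB, matching the ~43 TB vCOps shows
```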
If each VM averages 527 GB and there are 111 VMs, that's roughly 58.5 TB of allocated space, and 58.5 TB / 43 TB ≈ 136%. We validated that the VM count and the total space numbers reported in vCOps are accurate, so I am confused about where and how vCOps is arriving at 458%. We have the policy set to a 20% buffer and a 0% Allocation Overcommit Ratio. Is the vCOps algorithm flawed once remaining capacity goes negative (i.e., overallocation), or are we somehow misreading these numbers? To me, 458% overallocated means you would need to add 4.5x as much storage as you already have to get back to even, when in reality it looks as though 25-30 TB will do the trick, rather than the 180 TB or so the percentage would imply.
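For what it's worth, here is that arithmetic as a quick Python sanity check. I don't know the exact formula vCOps uses internally, so this is just my reading of the metric, with the two common definitions of over-commitment shown:

```python
# Back-of-the-envelope check of the over-commitment figure from the raw
# numbers above. Assumption: over-commitment is either allocated/usable or
# (allocated - usable)/usable; this is NOT vCOps code, just my reading.
vm_count = 111
avg_alloc_gb = 527.0   # average allocated disk per VM
usable_tb = 43.0       # usable capacity after the 20% buffer

allocated_tb = vm_count * avg_alloc_gb / 1000  # ~58.5 TB (decimal TB)

print(f"Allocated: {allocated_tb:.1f} TB")
print(f"Allocated / usable:          {allocated_tb / usable_tb * 100:.0f}%")  # ~136%
print(f"(Allocated - usable)/usable: {(allocated_tb - usable_tb) / usable_tb * 100:.0f}%")  # ~36%
```

Neither reading gets anywhere near the 458% being reported.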