Description of problem (please be as detailed as possible and provide log snippets):
After upgrading to OCP 4.9.x and OCS 4.9, the vSphere console shows the OSD disks consuming a large amount of storage, while `ceph df` reports much lower usage.

Version of all relevant components (if applicable):
ODF 4.9.7

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No, however it impacts capacity planning.

Is there any workaround available to the best of your knowledge?
Running fstrim on the application (CephFS/RBD) mount points could help.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
2

Can this issue be reproduced?
In the customer's environment

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
-

Steps to Reproduce:
1. Deploy ODF on VMware using vSAN storage
2. Create CephFS and RBD PVCs and run a workload on them
3. Delete data from the PVCs, then check storage consumption from vSphere

Actual results:
vSphere shows high OSD disk usage after the data is deleted.

Expected results:
vSphere should show the correct (reduced) usage once the data is deleted.

Additional info:
In next private comment
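The fstrim workaround mentioned above can be sketched as a small guarded script, run as root on the node (or inside the pod) where the PVC is mounted. This is only an illustration under stated assumptions: the mount path /mnt/pvc-data is a hypothetical placeholder, and actually releasing blocks requires root privileges and a filesystem/device that supports discard.

```shell
# Sketch of the fstrim workaround. /mnt/pvc-data is a hypothetical
# placeholder; substitute the real RBD or CephFS mount point of the PVC.
MOUNT_POINT="${MOUNT_POINT:-/mnt/pvc-data}"

# Only attempt the trim if the path is actually a mount point
# (checked against /proc/mounts), otherwise skip gracefully.
if grep -qs " $MOUNT_POINT " /proc/mounts; then
  # Release unused filesystem blocks so the backing thin-provisioned
  # vSAN/VMDK disk can shrink (requires root and discard support).
  fstrim -v "$MOUNT_POINT"
  STATUS="trimmed"
else
  STATUS="not-mounted"
  echo "skipping fstrim: $MOUNT_POINT is not a mount point"
fi
```

Note that this only reclaims space for that one filesystem; each affected RBD/CephFS mount would need to be trimmed separately.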