Bug 2033208

Summary: [GSS] ceph data pool is reporting high usage
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Sonal <sarora>
Component: ceph
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Elad <ebenahar>
Severity: medium
Priority: medium
Version: 4.8
CC: akrai, bniver, hnallurv, madam, mmuench, muagarwa, nravinas, ocs-bugs, odf-bz-bot, sostapov
Target Milestone: ---
Flags: sarora: needinfo? (akrai)
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-02-03 08:14:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Embargoed:

Description Sonal 2021-12-16 08:34:32 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- The 'ceph df' command reports very high usage for the CephFS data pool, whereas 'df -h' on the mounted volumes shows very little actual usage.

- No snapshots or clones of the csi-vol subvolumes are present.

- There are 10 orphan csi-vol subvolumes present, which consume only a few MBs (see the diagnostic command sketch after this list).
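
For reference, a minimal sketch of the commands used to compare the reported pool usage with the per-subvolume usage. This assumes access to the cluster via the rook-ceph-tools pod; the filesystem name ocs-storagecluster-cephfilesystem and the subvolume group csi are the ODF defaults and may differ in the customer environment:

  # Pool-level usage as reported by Ceph
  ceph df detail
  rados df

  # List the csi-vol subvolumes and inspect their reported size/usage
  ceph fs subvolume ls ocs-storagecluster-cephfilesystem csi
  ceph fs subvolume info ocs-storagecluster-cephfilesystem <subvolume-name> csi

  # Check whether any subvolume still has snapshots pinning deleted data
  ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem <subvolume-name> csi

  # From a node or pod where a CephFS PV is mounted, the actual usage
  df -h <mount-point>

Comparing the STORED/USED values from 'ceph df detail' with the sum of the subvolume sizes should show where the discrepancy lies.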

Version of all relevant components (if applicable):
OCS 4.8

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, the cluster is 99% full. 

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes, in customer's environment

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:
NA


Actual results:
- 'ceph df' reports high consumption of the Ceph data pool.

Expected results:
- As there are no snapshots or clones, and the CephFS PVs hold only a few GBs, the data pool should not consume much space (see the sketch below for a way to quantify the expected usage).
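
As a rough cross-check (a sketch, assuming the default ODF CephFS storage class name ocs-storagecluster-cephfs; the actual class name may differ), the total requested PVC capacity can be compared against the STORED value for the data pool in 'ceph df detail':

  # List every PVC with its storage class and requested size; the CephFS-backed
  # PVCs should add up to far less than the STORED value shown for the data pool
  oc get pvc -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,REQUESTED:.spec.resources.requests.storage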

Additional info:
In the next private comment.