Bug 2033208 - [GSS] ceph data pool is reporting high usage
Summary: [GSS] ceph data pool is reporting high usage
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.8
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Patrick Donnelly
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-16 08:34 UTC by Sonal
Modified: 2023-12-08 04:27 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-03 08:14:55 UTC
Embargoed:



Description Sonal 2021-12-16 08:34:32 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- The 'ceph df' command reports very high usage for the CephFS data pool, whereas 'df -h' on the mounted filesystem shows very little actual usage.

- No snapshots or clones of the csi-vol subvolumes are present.

- There are 10 orphaned csi-vol subvolumes present, consuming only a few MBs (see the diagnostic sketch after this list).
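
A minimal diagnostic sketch for this kind of discrepancy. This assumes the default ODF names (filesystem ocs-storagecluster-cephfilesystem, CSI subvolume group csi) and that the commands are run from the rook-ceph-tools pod; the mount path and subvolume name are placeholders:

  # Pool usage as Ceph accounts it (USED includes replication overhead)
  ceph df detail

  # Usage as seen from a client mount of the filesystem
  df -h /mnt/cephfs

  # List CSI-managed subvolumes; entries with no matching PV are orphans
  ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi

  # Per-subvolume details (path, bytes used) for a suspected orphan
  ceph fs subvolume info ocs-storagecluster-cephfilesystem <subvolume-name> --group_name csi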

Version of all relevant components (if applicable):
OCS 4.8

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, the cluster is 99% full. 
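
For reference, the reported fullness can be confirmed with standard Ceph commands (a sketch, again assuming access to the toolbox pod):

  ceph status        # reports health warnings such as OSD_NEARFULL / OSD_FULL
  ceph osd df tree   # per-OSD utilization, to confirm the 99% figure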

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes, in customer's environment

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:
NA


Actual results:
- High reported usage of the Ceph data pool.

Expected results:
- As there are no snapshots or clones, and the CephFS PVs hold only a few GBs of data, the data pool should not consume much space.
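
One accounting detail worth ruling out when comparing these numbers (a generic note, not a confirmed cause of this bug): the USED column in 'ceph df' includes replication, so a 3x replicated pool shows roughly three times the client-visible STORED bytes. Assuming the default ODF data pool name:

  # Replica count of the data pool (default ODF pools are 3x replicated)
  ceph osd pool get ocs-storagecluster-cephfilesystem-data0 size

  # STORED vs USED per pool; expect USED to be roughly size * STORED
  ceph df detail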

Additional info:
In the next private comment.

Comment 13 Red Hat Bugzilla 2023-12-08 04:27:08 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

