Bug 2011420 - [IBM Z] Storage reclaim not working after cleaning up the PVCs
Summary: [IBM Z] Storage reclaim not working after cleaning up the PVCs
Keywords:
Status: CLOSED DUPLICATE of bug 1943137
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.9
Hardware: s390x
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2021-10-06 14:58 UTC by Abdul Kandathil (IBM)
Modified: 2023-08-09 16:37 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-14 16:03:39 UTC
Embargoed:


Attachments

Description Abdul Kandathil (IBM) 2021-10-06 14:58:37 UTC
Description of problem (please be detailed as possible and provide log
snippets):
Observed during the ocs-ci test "tests/manage/monitoring/prometheus/test_capacity.py::test_rbd_capacity_workload_alerts", which fills up storage until it is near full and then cleans it up. After the PVCs were cleaned up, the storage was not reclaimed and the cluster kept reporting the CephClusterNearFull warning.
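
A minimal way to check whether the space was actually reclaimed after the PVC cleanup (a sketch only; the namespace and toolbox pod label assume the usual ODF defaults and may differ on a given cluster):

  # Locate the rook-ceph toolbox pod (assumes the default openshift-storage
  # namespace and the app=rook-ceph-tools label)
  TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)

  # Overall and per-pool usage; after the PVC cleanup the USED column
  # would be expected to drop back down
  oc -n openshift-storage rsh $TOOLS_POD ceph df

  # Check whether the nearfull warning is still active
  oc -n openshift-storage rsh $TOOLS_POD ceph health detail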


Version of all relevant components (if applicable):
odf 4.9.0-164.ci

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
After this stage, the only solution is to scale up the cluster. 


Is there any workaround available to the best of your knowledge?
After this stage, the only solution is to scale up the cluster.
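
For reference, capacity is normally expanded by increasing the count of the storageDeviceSets in the StorageCluster CR. A rough sketch, assuming the default resource name ocs-storagecluster, the openshift-storage namespace, and a single device set at index 0:

  # Bump the device set count by one step to add capacity (illustrative value)
  oc -n openshift-storage patch storagecluster ocs-storagecluster \
    --type json \
    -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'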

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue reproducible?
Intermittently reproducible

Can this issue reproduce from the UI?
no

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install OCP, OCS
2. Install OCS-CI
3. Execute the ocs-ci test. This may also be reproducible by simply filling a PVC that consumes more than 75% of the Ceph cluster capacity and then deleting it (see the sketch below).
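
A manual approximation of step 3, assuming an RBD-backed RWO PVC from the default ocs-storagecluster-ceph-rbd storage class; names, image, and sizes here are illustrative and not taken from the test:

  # Create a test PVC and a pod that fills it with data
  cat <<EOF | oc apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: fill-test-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: ocs-storagecluster-ceph-rbd
    resources:
      requests:
        storage: 100Gi
  EOF

  cat <<EOF | oc apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: fill-test-pod
  spec:
    containers:
    - name: writer
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sh", "-c", "dd if=/dev/zero of=/data/fill bs=1M count=90000; sleep 3600"]
      volumeMounts:
      - mountPath: /data
        name: data
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fill-test-pvc
  EOF

  # Once CephClusterNearFull fires, delete the pod and the PVC and watch
  # whether the usage reported by 'ceph df' drops back down
  oc delete pod fill-test-pod
  oc delete pvc fill-test-pvc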

Actual results:


Expected results:


Additional info:
Logs along with must-gather available in google drive: https://drive.google.com/file/d/1JCn4cwgJ4LTVdpdfWa94UWPOPxFn4jou/view?usp=sharing

Comment 2 Niels de Vos 2021-10-11 13:25:53 UTC
This is probably a duplicate of bug 1810525; please review and close it if you agree.

Bug 1810525 should have been addressed with build v4.9.0-182.ci of ODF.

Comment 3 Abdul Kandathil (IBM) 2021-10-11 16:21:57 UTC
I don't have access to the BZ mentioned.

Comment 4 Mudit Agarwal 2021-10-11 16:28:36 UTC
Check again please.

Comment 5 Abdul Kandathil (IBM) 2021-10-11 16:45:45 UTC
Looks similar, but the difference is that this cluster did not reach the osd full ratio threshold of 85%.
The cluster in this BZ stays constantly at CephClusterNearFull, which fires at 75%. The PVCs could be deleted, but the storage space was not released after the PVC cleanup.
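
For context, the configured thresholds and any space still held by RBD images can be inspected from the toolbox pod (same TOOLS_POD as above; the pool name ocs-storagecluster-cephblockpool is the usual ODF default and is an assumption here):

  # Nearfull/backfillfull/full ratios currently configured on the cluster
  oc -n openshift-storage rsh $TOOLS_POD ceph osd dump | grep -i ratio

  # Per-image space usage in the RBD pool; images left behind after the
  # PVC deletion would show up here
  oc -n openshift-storage rsh $TOOLS_POD rbd du -p ocs-storagecluster-cephblockpool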

Comment 12 Mudit Agarwal 2021-10-14 16:03:39 UTC

*** This bug has been marked as a duplicate of bug 1943137 ***

