Description of problem (please be as detailed as possible and provide log snippets):
Observed during the ocs-ci test "tests/manage/monitoring/prometheus/test_capacity.py::test_rbd_capacity_workload_alerts", which fills storage up to near-full and then cleans it up. After the PVCs were cleaned up, the storage was not reclaimed and the cluster stayed in the CephClusterNearFull warning state.

Version of all relevant components (if applicable):
odf 4.9.0-164.ci

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
After this stage, the only solution is to scale up the cluster.

Is there any workaround available to the best of your knowledge?
After this stage, the only solution is to scale up the cluster.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Can this issue be reproduced?
Intermittently reproducible

Can this issue be reproduced from the UI?
no

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Install OCP, OCS
2. Install OCS-CI
3. Execute the ocs-ci test.

I think this may be reproducible by simply filling up a PVC that consumes more than 75% of the Ceph cluster capacity.

Actual results:

Expected results:

Additional info:
Logs along with must-gather are available in Google Drive: https://drive.google.com/file/d/1JCn4cwgJ4LTVdpdfWa94UWPOPxFn4jou/view?usp=sharing
This probably is a duplicate of bug 1810525, please review and close it if you agree. Bug 1810525 should have been addressed with build v4.9.0-182.ci of ODF.
I don't have access to the BZ mentioned.
Check again please.
Looks similar, but the difference is that this cluster never reached the OSD full-ratio threshold, which is 85%. The cluster described in this BZ is staying constantly at CephClusterNearFull, which fires at 75%. The PVCs could be deleted, but deleting them did not release the storage space.
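For clarity on the two thresholds being compared in this comment, here is a minimal sketch (not part of ocs-ci; the function name and the example byte counts are illustrative) that classifies raw cluster usage against Ceph's default nearfull (75%) and full (85%) ratios:

```python
def ceph_fill_state(used_bytes: int, total_bytes: int,
                    nearfull_ratio: float = 0.75,
                    full_ratio: float = 0.85) -> str:
    """Classify raw cluster usage against Ceph's default ratios.

    CephClusterNearFull corresponds to the 75% band; the OSD
    full ratio defaults to 85%.
    """
    ratio = used_bytes / total_bytes
    if ratio >= full_ratio:
        return "full"
    if ratio >= nearfull_ratio:
        return "nearfull"
    return "ok"

# The cluster in this report stayed in the "nearfull" band even after
# the PVCs were deleted, i.e. somewhere between 75% and 85% raw usage.
print(ceph_fill_state(78, 100))  # prints "nearfull"
```

The distinction matters because bug 1810525 concerned a cluster that crossed the 85% full ratio, while this cluster is stuck below it.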
*** This bug has been marked as a duplicate of bug 1943137 ***