Description of problem (please be as detailed as possible and provide log snippets):

After the add capacity test, the Ceph cluster gets stuck in this state:

    pgs: 166 active+clean
         2   active+clean+scrubbing
         1   active+clean+scrubbing+deep

Version of all relevant components (if applicable):

I noticed that this started happening with build quay.io/rhceph-dev/ocs-registry:4.13.0-198. With quay.io/rhceph-dev/ocs-registry:4.13.0-197 it worked well, but from build -198 onward it is reproducible in every run.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, it blocks our testing framework because the cluster is not in a good state.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
Yes - from build -198 onward

Can this issue be reproduced from the UI?
Not relevant

If this is a regression, please provide more details to justify this:
Yes, build -197 did not exhibit this behavior.

Steps to Reproduce:
1. Perform an add capacity operation.
2. Wait for the rebalance; it does not finish within the timeout we used to have.

Actual results:
PGs stuck in:
    2 active+clean+scrubbing
    1 active+clean+scrubbing+deep

Expected results:
All PGs reach active+clean.

Additional info:
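The PG states above come from the ceph status output. As a minimal sketch, the stuck scrubs can be watched with standard, read-only Ceph CLI commands (the grep pattern is just an illustrative filter):

    ceph -s                                 # overall health and PG state summary
    ceph pg stat                            # compact PG state counts
    ceph pg dump pgs_brief | grep scrub     # list only the PGs still scrubbing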
Hello Petr,

We have been working on a fix, currently being tested and reviewed, that reduces osd_scrub_cost in order to speed up scrubs with mClock. The new scrub cost would be 102400 (osd_scrub_chunk_max (25) * 4 KiB).

You can change osd_scrub_cost using the following command:

    ceph config set osd 102400

To check osd_scrub_cost after modifying it:

    ceph config show osd.1 osd_scrub_cost

Regards,
Aishwarya
Sorry about that! The option name was missing from the command above; it should be:

    ceph config set osd osd_scrub_cost 102400

This should work fine.
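For completeness, a minimal sketch of the full sequence, setting the new cost and then verifying it took effect (osd.1 is just an example daemon ID; any OSD works):

    ceph config set osd osd_scrub_cost 102400
    ceph config get osd osd_scrub_cost        # value stored in the config database
    ceph config show osd.1 osd_scrub_cost     # value the running daemon actually uses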
Aishwarya, do we have a Ceph BZ or tracker where the fix you are working on can be tracked by ODF?
Hi Mudit,

The fix is being tracked here: https://tracker.ceph.com/issues/61313. It is currently under review.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742