Description of problem:
During a data rebalance (no failed OSDs), the system may report degraded objects. This is alarming and misleading (redundancy is not in fact degraded). Trivially reproducible on a cluster under any load.

How reproducible:
Easy

Steps to Reproduce:
1. ceph osd pool create foo 64
2. rados -p foo bench 30 write -b 4096 --no-cleanup
3. ceph osd out 0
4. rados -p foo bench 30 write -b 4096 --no-cleanup &
5. watch ceph -s

Actual results:
"ceph -s" shows degraded objects.

Expected results:
Only misplaced objects should be shown.
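For convenience, here are the steps above as one script. This is a minimal sketch, assuming a disposable test cluster with at least 3 OSDs in and default replication, so that marking osd.0 out only triggers a rebalance (the pool name "foo" and the polling loop are arbitrary choices, and the exact wording of the degraded/misplaced lines in "ceph -s" varies by release):

#!/bin/sh
# Reproducer sketch: assumes a throwaway test cluster where taking
# osd.0 out causes rebalancing only, not any real loss of redundancy.
set -e

ceph osd pool create foo 64
rados -p foo bench 30 write -b 4096 --no-cleanup

# Mark osd.0 out while a second write workload runs in the background.
# With the bug present, "ceph -s" reports degraded objects even though
# no OSD has failed.
ceph osd out 0
rados -p foo bench 30 write -b 4096 --no-cleanup &

# Poll instead of "watch" so the output can be captured; the grep
# pattern is an assumption and may need adjusting per release.
for i in $(seq 1 15); do
    ceph -s | grep -E 'degraded|misplaced' || true
    sleep 2
done
wait

With the fix, only misplaced objects should appear in the polled output.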
upstream PR: https://github.com/ceph/ceph/pull/18297
(In reply to Sage Weil from comment #3)
> upstream PR: https://github.com/ceph/ceph/pull/18297

This was merged ~1.5 years ago. Can we CLOSE-UPSTREAM this?