.During a data rebalance of a Ceph cluster, the system might report degraded objects
Under certain circumstances, such as when an OSD is marked out, the number of degraded objects reported during a data rebalance of a Ceph cluster can be too high, in some cases implying a problem where none exists.
Description of problem:
During a data rebalance (no failed OSDs), the system may report degraded objects. This is alarming and misleading: redundancy is not in fact degraded, since all copies of the objects still exist; they are merely misplaced. The behavior is trivially reproducible on a cluster under any load.
Steps to Reproduce:
1. ceph osd pool create foo 64
2. rados -p foo bench 30 write -b 4096 --no-cleanup
3. ceph osd out 0
4. rados -p foo bench 30 write -b 4096 --no-cleanup &
5. watch ceph -s
Actual results: degraded objects are shown.
Expected results: only misplaced objects should be shown.
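For context on why the report is misleading: in Ceph's terms an object is *degraded* when it has fewer copies than the pool's replica count, and *misplaced* when all copies exist but some are not on the OSDs CRUSH currently maps it to. Marking an OSD out (step 3) moves the mapping, not the data, so objects should only be misplaced. A minimal sketch of the distinction, using a hypothetical helper (not Ceph code):

```python
def classify(num_copies, pool_size, acting, up):
    """Classify an object's state during rebalance.

    num_copies: replicas of the object that currently exist
    pool_size:  desired replica count (the pool's 'size')
    acting:     OSDs currently holding/serving the object
    up:         OSDs CRUSH says should hold it now
    """
    if num_copies < pool_size:
        return "degraded"    # genuine loss of redundancy
    if set(acting) != set(up):
        return "misplaced"   # full redundancy, wrong location
    return "clean"

# After 'ceph osd out 0', copies still exist on osd.0;
# redundancy is intact, the data just needs to move.
print(classify(3, 3, acting=[0, 1, 2], up=[1, 2, 3]))  # misplaced
# Only an actual missing copy should count as degraded:
print(classify(2, 3, acting=[1, 2], up=[1, 2, 3]))     # degraded
```

The bug is that the accounting during backfill lumped the first case in with the second, inflating the degraded count.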
upstream PR: https://github.com/ceph/ceph/pull/18297
(In reply to Sage Weil from comment #3)
> upstream PR: https://github.com/ceph/ceph/pull/18297
This was merged ~1.5 years ago. Can we CLOSE-UPSTREAM this?