Description of problem (please be as detailed as possible and provide log snippets):

The customer wanted to reduce the size of their ODF cluster and followed KCS [1] to do so. The Ceph cluster initially had three OSDs per node across three nodes [3]. The customer successfully downsized the cluster by removing osd.4 from rack0, closed the case, and then went on to remove more OSDs. The OSDs in the Ceph cluster now look like this [4]: OSDs 6, 3 and 1 have been removed, and osd.0 (the only remaining OSD in rack0) is down because it was OOM killed. Logs from osd.0 indicate it was killed [5], and the pod yaml "rook-ceph-osd-0-5d7bc94f6f-r8sqk.yaml" confirms osd.0 was OOM killed [6]. There were also many peering events for osd.0 [7]. In addition, the pool "ocs-storagecluster-cephfilesystem-data0" has its PG count increasing from 55 to 63 [8].

KCS [2] seemed to match; however, the diagnostics for osd.0 could not be retrieved, as it is down due to being OOM killed. Mempool data for the other OSDs, collected per the Diagnostics section of KCS [2], looks OK [9] (a sketch of the relevant commands is included under Additional info below).

The customer increased the memory for osd.0, but the issue is still occurring.

The customer uploaded osd.0 debug logs and mon logs. Logs are available on supportshell:
* /cases/03459595/0080-rook-ceph-osd-0-logs-2min.txt
* /cases/03459595/0090-rook-ceph-mon-ad-logs.txt
* /cases/03459595/0100-rook-ceph-mon-ae-logs.txt
* /cases/03459595/0110-rook-ceph-mon-af-logs.txt

[1] Steps to remove unwanted or failed Ceph OSD in Red Hat OpenShift Data Foundation (previously known as Red Hat OpenShift Container Storage)
    https://access.redhat.com/solutions/5015451
[2] Ceph OSD pods in CLBO state due to PG Dup log issue in ODF Environment
    https://access.redhat.com/solutions/6987599

Version of all relevant components (if applicable):
* ODF 4.10.10 is installed [10].
* Ceph version 16.2.7-126.el8cp is used by ODF [11].

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
* Yes. The Ceph OSDs are unbalanced [12]: rack0 has only 1 OSD, whereas the other racks each have 2 OSDs, and the only OSD in rack0 is down. As redundancy in Ceph is set to 3 (the default), Ceph cannot maintain redundancy across 3 hosts because there are not enough distinct hosts with available OSDs.

Is there any workaround available to the best of your knowledge?
No.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
3

Can this issue be reproduced?
No.

Can this issue be reproduced from the UI?
No.

If this is a regression, please provide more details to justify this:
N/A.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
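For reference, a minimal sketch of how the mempool and memory-target data referenced above can be collected from the rook-ceph toolbox pod. Assumptions: the default openshift-storage namespace and the default app=rook-ceph-tools toolbox label; the exact commands listed in the Diagnostics section of KCS [2] may differ.

    # Enter the toolbox pod (namespace and label assumed to be the ODF defaults)
    oc -n openshift-storage get pods -l app=rook-ceph-tools
    oc -n openshift-storage rsh <rook-ceph-tools-pod>

    # Confirm the CRUSH layout and which OSDs are up/down per rack
    ceph osd tree
    ceph osd df tree

    # Dump mempool usage for the running OSDs; an unusually large osd_pglog
    # value would point at the PG dup log issue described in KCS [2]
    ceph tell osd.* dump_mempools

    # Check the memory target recorded for osd.0 in the mon config database
    ceph config get osd.0 osd_memory_target

Note that "ceph tell" only reaches running daemons, which is consistent with the mempool diagnostics not being retrievable for osd.0 while it is down.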