Description of problem (please be as detailed as possible and provide log snippets):

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-04-28T08:43:20Z"
    message: '0/9 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: },
      6 node(s) didn''t match Pod''s node affinity/selector. preemption: 0/9 nodes are available:
      9 Preemption is not helpful for scheduling..'
    reason: Unschedulable

The error, pointed out by Subham Rai, in the rook operator log:

    rookcmd: failed to run operator: gave up to run the operator manager: failed to set up overall controller-runtime manager: error listening on :8080: listen tcp :8080: bind: address already in use

Version of all relevant components (if applicable):
v4.13.0-178 on top of a nightly OCP 4.13 build

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Haven't tried

If this is a regression, please provide more details to justify this:
Yes, it worked in 4.12

Steps to Reproduce:
1. Install ODF 4.13 on vSphere with 6 nodes as an arbiter-enabled deployment

Actual results:
Mon pods are not scheduled

Expected results:
All mon pods scheduled and running

Additional info:
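For context, the operator failure above is the standard Go "address already in use" bind error: the controller-runtime manager tries to listen on :8080 while another process on the pod (presumably the exporter, per the duplicate bug noted below) already holds that port. The following is a minimal Go sketch, not taken from the Rook code, that reproduces the same class of error:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// First listener grabs :8080, standing in for the exporter.
    	first, err := net.Listen("tcp", ":8080")
    	if err != nil {
    		fmt.Println("unexpected:", err)
    		return
    	}
    	defer first.Close()

    	// Second listener stands in for the operator's controller-runtime
    	// metrics endpoint; it fails because the port is already bound.
    	_, err = net.Listen("tcp", ":8080")
    	if err != nil {
    		// Prints something like:
    		// listen tcp :8080: bind: address already in use
    		fmt.Println(err)
    	}
    }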
Can you try to reproduce this in the arbiter cluster again and share the cluster? It's difficult for us to set up a similar cluster and reproduce it ourselves.
Yes, this is a duplicate of bug 2187952. I disabled the exporter and everything was working, so I am closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 2187952 ***