Description of problem:
I have a 4.8 cluster with 2 workers. Every time I upgrade that cluster, the HighlyAvailableWorkloadIncorrectlySpread alert fires and requires manual intervention to resolve.
The situation is that, during the upgrade, each worker is cordoned and drained in sequence. The result is the following:
1. worker-0 is cordoned and drained.
2. prometheus-k8s-0, running on worker-0, is evicted to worker-1, where prometheus-k8s-1 is already running.
3. worker-0 upgrades and becomes schedulable again.
4. worker-1 is cordoned and drained.
5. Both Prometheus pods are evicted and restarted on worker-0.
6. worker-1 upgrades and becomes schedulable again.
The final result is that both Prometheus pods are running on the same worker. With only two workers, this happens on every upgrade.
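The steps above can be sketched as a small simulation (hypothetical code, not part of any operator; the scheduler model is deliberately simplified: soft anti-affinity only *prefers* spreading, so with a single schedulable node left during a drain, an evicted pod has nowhere else to go):

```python
def drain(node, placement):
    """Evict every pod from `node` onto the one remaining schedulable worker.

    During a drain there are only two workers and one of them is cordoned,
    so soft anti-affinity cannot prevent co-location.
    """
    other = "worker-1" if node == "worker-0" else "worker-0"
    for pod, host in placement.items():
        if host == node:
            placement[pod] = other

# Initial state: one Prometheus replica per worker.
placement = {"prometheus-k8s-0": "worker-0", "prometheus-k8s-1": "worker-1"}

# Workers are upgraded in sequence; nothing rebalances pods afterwards.
for node in ("worker-0", "worker-1"):
    drain(node, placement)

# Both replicas now share a node, which is what trips the alert.
assert placement["prometheus-k8s-0"] == placement["prometheus-k8s-1"]
print(placement)  # both pods end up on worker-0
```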
Version-Release number of selected component (if applicable):
Steps to Reproduce:
This is intended to some extent, since we want to prevent single points of failure, but I can see how this can become annoying. Ideally, we would switch to hard anti-affinity on hostname and add a PDB, which would prevent this from happening, but that effort is currently blocked by bug 1995924.
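For reference, the hard-anti-affinity-plus-PDB approach mentioned above could look roughly like the following. This is only a sketch under assumed label names; the actual fields managed by the cluster-monitoring-operator may differ:

```yaml
# Sketch: require (not merely prefer) that Prometheus replicas land on
# different hosts. The label selector below is an assumption.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus
      topologyKey: kubernetes.io/hostname
---
# Sketch: a PDB keeping at least one replica available, so a drain that
# would take down the last spread-compliant replica is blocked.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: prometheus-k8s
  namespace: openshift-monitoring
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus
```

With hard anti-affinity, an evicted replica simply stays Pending until a second node is schedulable again, instead of being co-located; the PDB keeps the drain from proceeding while only one replica is up.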
That said, now that we have taken a new approach to handling high availability when persistent storage is enabled, in the form of bug 1995924, we might want to consider removing the `HighlyAvailableWorkloadIncorrectlySpread` alert entirely.
Checked with the PR: HighlyAvailableWorkloadIncorrectlySpread is removed, but #1489 needs to be merged first.
The fix is in 4.10.0-0.nightly-2021-12-21-130047; HighlyAvailableWorkloadIncorrectlySpread is removed. Setting the bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.