An etcd member going down is a normal part of an upgrade. etcdInsufficientMembers should not fire during an upgrade when the member is on a master that has been marked unschedulable for drain for less than some threshold in the 15-30m range; once that threshold is exceeded it should fire. So the precondition for suppressing it is that the node has been unschedulable for less than 25m, which is roughly how long a slow bare metal node should take to drain the master, reboot, and upgrade (measured at about 10m total on cloud, with a 2m reboot). If 25m turns out not to be enough in practice, we should consider extending the grace period.

The current alert query is:

  sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"}) without (instance) + 1) / 2)

The alert for OpenShift should instead be:

  (count of instances that are up) < ((instances that could be up and are not on nodes that have been continuously unschedulable for less than X m + 1) / 2)

The quorum-lost alerts will handle the rest.
Oops, I meant etcdMembersDown. Insufficient members is usually "lost quorum".
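As a rough illustration of that shape, a sketch of what an etcdMembersDown expression with the grace period could look like is below. This is only a sketch: it assumes a 25m grace period and, more importantly, that the etcd "up" series can be matched to kube_node_spec_unschedulable (kube-state-metrics) on a shared "node" label, which is not the case out of the box (the etcd job exposes instance=IP:port), so a real rule would need a label_replace()/kube_pod_info style join to line the labels up.

  sum(up{job=~".*etcd.*"} == bool 1) without (instance)
  <
  (
    (
      count(
        up{job=~".*etcd.*"}
        # drop members on nodes cordoned for less than the 25m grace period:
        # currently unschedulable, but still schedulable at some point in the last 25m
        unless on (node)
          (
                kube_node_spec_unschedulable == 1
            and min_over_time(kube_node_spec_unschedulable[25m]) == 0
          )
      ) without (instance)
      + 1
    ) / 2
  )

Members on nodes that have been unschedulable for the full 25m (or longer) stay in the denominator, so the alert starts firing once the grace period is exceeded, and a member that is down on a node that was never cordoned counts against quorum immediately.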
Discovered https://bugzilla.redhat.com/show_bug.cgi?id=1929944 while investigating this.
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.