+++ This bug was initially created as a clone of Bug #2052270 +++
The new condition landed in the dev branch after 4.9 forked off and before 4.10 forked off, so 4.10 and later are impacted.
There are a few issues with the current implementation:
* Several "treshold" -> "threshold" typos.
* The "etcd disk metrics exceeded..." reasons are not the CamelCase slugs that the condition's Reason field expects.
* The condition may not get cleared once latency returns to reasonable levels, although there are no exact code links to back this one up.
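The third point is the latching behavior: the controller should recompute the condition from current observations on every sync, setting Degraded=False again once fsync latency recovers. A minimal sketch of non-latching condition logic, using a simplified condition struct and a hypothetical threshold and reason slugs (not the operator's actual values):

```go
package main

import "fmt"

// Condition mirrors the shape of an operator status condition
// (simplified for illustration; not the real API types).
type Condition struct {
	Type   string
	Status string // "True" or "False"
	Reason string // CamelCase slug, e.g. "FSyncDurationHigh"
}

// syncFSyncCondition derives the condition from the currently observed
// 99th-percentile fsync latency. Crucially, it also returns
// Degraded=False once latency drops back under the threshold, so the
// condition does not latch on a transient spike.
func syncFSyncCondition(p99FsyncMs float64) Condition {
	const thresholdMs = 10.0 // hypothetical threshold
	if p99FsyncMs > thresholdMs {
		return Condition{
			Type:   "FSyncControllerDegraded",
			Status: "True",
			Reason: "FSyncDurationHigh", // CamelCase slug, no spaces
		}
	}
	// Clear the condition once latency is healthy again.
	return Condition{
		Type:   "FSyncControllerDegraded",
		Status: "False",
		Reason: "AsExpected",
	}
}

func main() {
	fmt.Println(syncFSyncCondition(25.0).Status) // spike: Degraded=True
	fmt.Println(syncFSyncCondition(3.0).Status)  // recovered: cleared
}
```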
--- Additional comment from William Caban on 2022-02-21 16:48:49 UTC ---
I've been able to reproduce etcd staying in degraded mode and preventing cluster upgrades after that. In my case, after crashing the cluster and rebooting the nodes, we found that one node had an SSD whose latency would sometimes spike on initial access (randomly on boot). Even though the disks were stable afterwards, and were fine on other reboots, etcd kept this FSyncControllerDegraded condition and blocked any further upgrades.
This was confirmed multiple times when upgrading from 4.9.19 to 4.10rc2 or 4.10rc3.
--- Additional comment from W. Trevor King on 2022-02-21 17:10:44 UTC ---
[The latching] is by far the most important, because a Degraded=True etcd ClusterOperator will block updates, including 4.y.z -> 4.y.z' patch updates, unless the cluster admin does some hoop-jumpy workarounds, or is updating to a release that fixes the latching behavior.
Bug 2052270 is covering the typos. Bug 2057642 is covering the slugging. This bug series picks up the third point: latching Degraded=True. I'm preserving blocker- from , but if this were fixed for 4.10.0, I would not be sad ;).
@wking, this bug is assigned to us for verification; do you have any suggestions for how to verify it? Thanks.
From comment 0, originally from , William Caban was able to reproduce this by restarting all of his control plane nodes. How effective that is probably depends on how close your disks are to the threshold.
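During verification, one way to confirm the condition has cleared is to inspect the etcd ClusterOperator's Degraded condition from `oc get clusteroperator etcd -o json` output. A small sketch that extracts a named condition from such JSON (the struct shape is simplified, and the sample payload is illustrative, not captured from a real cluster):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// findCondition returns the status of the named condition type from a
// ClusterOperator-style status JSON (shape simplified for illustration).
func findCondition(raw []byte, condType string) (string, error) {
	var co struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(raw, &co); err != nil {
		return "", err
	}
	for _, c := range co.Status.Conditions {
		if c.Type == condType {
			return c.Status, nil
		}
	}
	return "", fmt.Errorf("condition %q not found", condType)
}

func main() {
	// Sample standing in for `oc get clusteroperator etcd -o json`.
	sample := []byte(`{"status":{"conditions":[
		{"type":"Available","status":"True"},
		{"type":"Degraded","status":"False"}]}}`)
	status, err := findCondition(sample, "Degraded")
	if err != nil {
		panic(err)
	}
	fmt.Println(status) // prints "False" for this sample
}
```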
Tried restarting all control plane nodes, but could not reproduce this issue. Tried with 4.11.0-0.nightly-2022-03-20-160505 and ran some regression tests with a high workload, but still did not hit this issue, so closing this bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.