The fix is https://github.com/openshift/library-go/pull/930 . Per bug 1890724, the attached bumping PRs are:

For KCM: https://github.com/openshift/cluster-kube-controller-manager-operator/pull/472
For KS: https://github.com/openshift/cluster-kube-scheduler-operator/pull/294
For KAS: https://github.com/openshift/cluster-kube-apiserver-operator/pull/992, but this one is still open.

Further checking found that another merged KAS PR, https://github.com/openshift/cluster-kube-apiserver-operator/pull/993/files, already bumped the fix in vendor/github.com/openshift/library-go/pkg/operator/staticpod/controller/staticpodstate/staticpodstate_controller.go, so https://github.com/openshift/cluster-kube-apiserver-operator/pull/992 needs to be rebased or closed.

In an unfixed version, check:

oc logs deployment/kube-controller-manager-operator -n openshift-kube-controller-manager-operator
oc logs deployment/openshift-kube-scheduler-operator -n openshift-kube-scheduler-operator
oc logs deployment/kube-apiserver-operator -n openshift-kube-apiserver-operator

During normal rollouts, all KCM/KS/KAS static pod operators emit noisy OperatorStatusChanged logs like the one below, with the misleading "not ready: unknown reason":

... reason: 'OperatorStatusChanged' ... StaticPodsDegraded: pod/kube- ... container ... is running for 17.065599242s but not ready: unknown reason ...

Verified in a 4.7.0-0.nightly-2020-11-03-002310 env: the KCM/KS/KAS static pod operators no longer emit such misleading logs.
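The per-operator checks above can be sketched as one loop. This is a hypothetical helper, not part of the fix or the verification steps: it assumes a logged-in `oc` session, and the deployment/namespace names are the ones from the commands above. `count_misleading` is an illustrative function name, not an `oc` feature.

```shell
# Sketch only: count the misleading StaticPodsDegraded lines in each
# static pod operator's logs (deployment and namespace names taken
# from the verification commands in this bug).

count_misleading() {
  # stdin -> number of log lines containing the misleading message
  grep -c 'not ready: unknown reason'
}

# Only attempt the live check when `oc` is available and logged in.
if command -v oc >/dev/null 2>&1; then
  for pair in \
    kube-controller-manager-operator:openshift-kube-controller-manager-operator \
    openshift-kube-scheduler-operator:openshift-kube-scheduler-operator \
    kube-apiserver-operator:openshift-kube-apiserver-operator
  do
    name=${pair%%:*}
    ns=${pair#*:}
    n=$(oc logs "deployment/$name" -n "$ns" | count_misleading)
    # On a build with the library-go fix bumped, expect 0 for each operator.
    echo "$name: $n misleading line(s)"
  done
fi
```

On an unfixed version each operator should report a nonzero count during rollouts; on 4.7.0-0.nightly-2020-11-03-002310 and later all three should report 0.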
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
The LifecycleStale keyword was removed because the bug got commented on recently. The bug assignee was notified.
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Whiteboard if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
Dear reporter, As part of the migration of all OpenShift bugs to Red Hat Jira, we are evaluating all bugs, which will result in stale issues and those without high or urgent priority being closed. If you believe this bug still requires engineering resolution, we kindly ask you to follow this link [1] and continue working with us in Jira by recreating the issue and providing the necessary information. Also, please provide the link to the original Bugzilla in the description. To create an issue, follow this link: [1] https://issues.redhat.com/secure/CreateIssueDetails!init.jspa?pid=12332330&issuetype=1&priority=10300&components=12367637