In https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.10-e2e-aws-ovn-upgrade/1494174529876922368 the test "[sig-arch] events should not repeat pathologically" failed with "2 events happened too frequently":

event happened 30 times, something is wrong: ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator - reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running")

event happened 31 times, something is wrong: ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator - reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well")
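For context, the check amounts to counting how often the same event recurs and flagging anything above a threshold. The sketch below is only an illustration of that idea, not the actual openshift/origin test; the threshold value and the Event fields are assumptions.

package main

import "fmt"

// Event is a minimal stand-in for the fields the check cares about.
type Event struct {
	Namespace string
	Reason    string
	Message   string
	Count     int
}

const repeatThreshold = 20 // assumed for illustration; the real limit may differ

// findPathologicalEvents flags events whose count exceeds the threshold,
// e.g. the OperatorStatusChanged events above, which fired 30-31 times.
func findPathologicalEvents(events []Event) []string {
	var failures []string
	for _, e := range events {
		if e.Count > repeatThreshold {
			failures = append(failures, fmt.Sprintf(
				"event happened %d times, something is wrong: ns/%s - reason/%s %s",
				e.Count, e.Namespace, e.Reason, e.Message))
		}
	}
	return failures
}

func main() {
	events := []Event{{
		Namespace: "openshift-cluster-storage-operator",
		Reason:    "OperatorStatusChanged",
		Message:   `Progressing changed from False to True ("CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running")`,
		Count:     30,
	}}
	for _, f := range findPathologicalEvents(events) {
		fmt.Println(f)
	}
}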
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-e2e-azure-ovn-upgrade/1499508868990898176 This job ran after the PR merged and still shows the error; will keep monitoring.
I've seen two failures in release-openshift-okd-installer-e2e-aws-upgrade, but they seem to be caused by something else (the cluster looks unhealthy): https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-okd-installer-e2e-aws-upgrade/1500773368465461248 https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-okd-installer-e2e-aws-upgrade/1500672696625664000
Have not seen this failure in https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-e2e-azure-ovn-upgrade since then, so updating the status to "VERIFIED".
If it's OK to re-use this bug: this problem still appears to exist, albeit rarely. https://search.ci.openshift.org/?search=something+is+wrong.*CSISnapshotWebhookControllerProgressing%3A+1+out+of+2+pods+running&maxAge=168h&context=1&type=bug%2Bjunit&name=4.11&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job presently shows a few hits in the last week for 4.11 jobs:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-ovn-upgrade/1534420535566405632
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1533773073327591424
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1533210671909441536
Each hit shows the event firing 30-40 times, which is a little odd and likely indicates a problem in the operator that allows it to fire the same event so many times. Should we re-open this, or file a new bug?
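To illustrate why the counts land in that range: if the operator records an OperatorStatusChanged event on every Progressing transition, each flap cycle (False->True, then True->False) produces two events, so roughly 15-20 flap cycles give 30-40 events. The sketch below is purely hypothetical and is not the csi-snapshot-controller-operator's actual code; it just shows that per-transition event emission plus a flapping condition is enough to reach those numbers.

package main

import "fmt"

type operator struct {
	progressing bool
}

// setProgressing records an event every time the condition value changes.
func (o *operator) setProgressing(progressing bool, reason string) {
	if o.progressing == progressing {
		return // no change, no event
	}
	o.progressing = progressing
	fmt.Printf("OperatorStatusChanged: Progressing changed from %t to %t (%q)\n",
		!progressing, progressing, reason)
}

func main() {
	o := &operator{}
	// A deployment repeatedly dropping to 1 of 2 ready pods flaps the
	// condition, emitting two events per cycle: 15 cycles -> 30 events.
	for i := 0; i < 15; i++ {
		o.setProgressing(true, "CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running")
		o.setProgressing(false, "All is well")
	}
}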
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069