Bug 2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
Summary: [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.11.0
Assignee: Fabio Bertinatto
QA Contact: Wei Duan
URL:
Whiteboard:
Depends On:
Blocks: 2061343 2062197
 
Reported: 2022-02-22 17:26 UTC by Fabio Bertinatto
Modified: 2022-08-10 10:51 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2061343
Environment:
Last Closed: 2022-08-10 10:50:45 UTC
Target Upstream Version:
Embargoed:




Links:
Github openshift cluster-csi-snapshot-controller-operator pull 114 (open): Bug 2057079: Fix race when setting Progressing condition (last updated 2022-03-01 19:45:49 UTC)
Red Hat Product Errata RHSA-2022:5069 (last updated 2022-08-10 10:51:02 UTC)

Description Fabio Bertinatto 2022-02-22 17:26:46 UTC
In https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.10-e2e-aws-ovn-upgrade/1494174529876922368

[sig-arch] events should not repeat pathologically	0s
2 events happened too frequently

event happened 30 times, something is wrong: ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator - reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotWebhookControllerProgressing: 1 out of 2 pods running")
event happened 31 times, something is wrong: ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator - reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well")
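
For context, the churn in these events is the clusteroperator's top-level Progressing condition flipping on nearly every sync; the messages show it being derived from per-controller conditions such as CSISnapshotWebhookControllerProgressing. Below is a minimal Go sketch of one common way to avoid that kind of flapping: aggregate the per-controller values first and only report a change when the aggregate actually flips. The helper names and the second controller condition are hypothetical, for illustration only; this is not the operator's or library-go's actual code.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// aggregateProgressing folds per-controller *Progressing conditions into a
// single top-level Progressing value: True if any controller reports a
// non-empty progressing message, False ("All is well") otherwise.
// Hypothetical helper, for illustration only.
func aggregateProgressing(perController map[string]string) (string, string) {
	var reasons []string
	for name, msg := range perController {
		if msg != "" {
			reasons = append(reasons, fmt.Sprintf("%s: %s", name, msg))
		}
	}
	if len(reasons) == 0 {
		return "False", "All is well"
	}
	sort.Strings(reasons) // deterministic message regardless of map order
	return "True", strings.Join(reasons, "\n")
}

func main() {
	// State similar to what the CI run reported: the webhook controller is
	// still rolling out, so its per-controller condition carries a message.
	// The second entry is a made-up sibling condition for illustration.
	perController := map[string]string{
		"CSISnapshotWebhookControllerProgressing": "1 out of 2 pods running",
		"HypotheticalOtherControllerProgressing":  "",
	}

	lastStatus := ""
	for sync := 0; sync < 3; sync++ {
		status, message := aggregateProgressing(perController)
		// Report a change only when the aggregated status actually flips;
		// repeated syncs with the same aggregate stay quiet instead of
		// emitting a fresh OperatorStatusChanged-style event each time.
		if status != lastStatus {
			fmt.Printf("sync %d: Progressing changed to %s (%q)\n", sync, status, message)
			lastStatus = status
		}
	}
}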

Comment 2 Wei Duan 2022-03-04 03:11:54 UTC
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-e2e-azure-ovn-upgrade/1499508868990898176
This job ran after the PR merged and still shows the error; will keep monitoring.

Comment 4 Fabio Bertinatto 2022-03-07 13:01:59 UTC
I've seen two failures in release-openshift-okd-installer-e2e-aws-upgrade, but they seem to be caused by something else (the cluster seems unhealthy):
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-okd-installer-e2e-aws-upgrade/1500773368465461248
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-okd-installer-e2e-aws-upgrade/1500672696625664000

Comment 5 Wei Duan 2022-03-09 09:48:03 UTC
Did not see such failures in https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-e2e-azure-ovn-upgrade after that; updating status to "VERIFIED".

Comment 8 errata-xmlrpc 2022-08-10 10:50:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

