Bug 1781109
| Summary: | [aws] Cluster operator cloud-credential is reporting a failure: 1 of 4 credentials requests are failing to sync |
|---|---|
| Product: | OpenShift Container Platform |
| Component: | Cloud Credential Operator |
| Version: | 4.3.0 |
| Target Release: | 4.4.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Priority: | high |
| Severity: | high |
| Status: | CLOSED ERRATA |
| Type: | Bug |
| Reporter: | Vadim Rutkovsky <vrutkovs> |
| Assignee: | Joel Diaz <jdiaz> |
| QA Contact: | Xiaoli Tian <xtian> |
| Doc Type: | Bug Fix |
| Last Closed: | 2020-05-13 21:54:13 UTC |
| Clones: | 1783963 (view as bug list) |
| Bug Blocks: | 1776700, 1783963 |

Doc Text:

* Cause: CredentialsRequests metrics improperly reported errors after conditions had cleared.
* Consequence: Alerts continued to fire after the condition had resolved.
* Fix: Metrics publishing now always starts each count at zero before tallying any items with conditions.
* Result: When conditions clear, the metrics reflect the actual state, which clears any alerts.
Description
Vadim Rutkovsky
2019-12-09 10:52:05 UTC
I've tested this issue during an upgrade from 4.4.0-0.nightly-2019-12-14-103510 to 4.4.0-0.nightly-2019-12-14-103510. Currently I don't observe the reported failures from the cloud-credential operator. We will leave the cluster running for some days to check whether the failures reappear over that period.

Happened 3 times over the weekend, mostly on upgrade jobs:

* https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/periodic-ci-openshift-osde2e-master-e2e-int-4.3/650
* https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/12666
* https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/6396/rehearse-6396-pull-ci-openshift-cluster-kube-apiserver-operator-master-e2e-aws-upgrade/2

As I can see, the target release for this fix is 4.4. Could you please check it on 4.4 too?

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/6396/rehearse-6396-pull-ci-openshift-cluster-kube-apiserver-operator-master-e2e-aws-upgrade/2 is 4.4, and so is https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/12666 (a 4.4 nightly -> 4.4 nightly upgrade). However, both ran on Dec 13 payloads, so the PR might not have merged by that time. Let's give it a few more days to run.

Verified on 4.4.0-0.nightly-2019-12-14-103510. I checked the logs on the cco pod two days after install and did not observe this issue.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581
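The fix described in the Doc Text (always starting metrics publishing with a zero count before tallying failing CredentialsRequests) can be sketched as follows. This is a minimal illustration, not the operator's actual code: the condition names, the `credentialsRequest` type, and the plain map standing in for a Prometheus gauge are all assumptions made for the example.

```go
package main

import "fmt"

// Illustrative failure-condition names; not necessarily the exact
// identifiers used by the cloud-credential-operator.
var knownConditions = []string{
	"CredentialsProvisionFailure",
	"MissingTargetNamespace",
	"Ignored",
}

// credentialsRequest is a simplified stand-in for the operator's
// CredentialsRequest objects.
type credentialsRequest struct {
	name       string
	conditions []string // failure conditions currently set on the object
}

// publishConditionMetrics recomputes per-condition counts from scratch.
// The key point of the fix: every known condition is seeded with zero
// first, so a condition that has cleared is published as 0 rather than
// silently retaining its previous non-zero value (which kept alerts firing).
func publishConditionMetrics(requests []credentialsRequest) map[string]int {
	counts := map[string]int{}
	for _, c := range knownConditions {
		counts[c] = 0 // zero out before tallying
	}
	for _, cr := range requests {
		for _, c := range cr.conditions {
			counts[c]++
		}
	}
	return counts
}

func main() {
	// First pass: one request carries a failure condition.
	failing := []credentialsRequest{
		{name: "openshift-machine-api", conditions: []string{"CredentialsProvisionFailure"}},
	}
	fmt.Println(publishConditionMetrics(failing)["CredentialsProvisionFailure"]) // 1

	// Second pass: the condition has cleared. Because the count was
	// re-seeded at zero, the published metric drops to 0 and the alert clears.
	cleared := []credentialsRequest{
		{name: "openshift-machine-api", conditions: nil},
	}
	fmt.Println(publishConditionMetrics(cleared)["CredentialsProvisionFailure"]) // 0
}
```

In the real operator the counts feed a Prometheus gauge, and the same principle applies: rebuilding the gauge from a zeroed baseline on every publish cycle makes the metric converge to the true cluster state instead of only ever ratcheting upward.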