Bug 2093129 - Alerts shouldn't report any unexpected alerts in firing or pending state
Keywords:
Status: CLOSED DUPLICATE of bug 2093288
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: tflannag
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-03 03:39 UTC by Dan Williams
Modified: 2022-06-07 14:22 UTC
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-07 14:22:41 UTC
Target Upstream Version:
Embargoed:



Description Dan Williams 2022-06-03 03:39:25 UTC
Marketplace registry pods seem to be restarting frequently due to failed readiness probes. We see this in jobs with both the SDN and OVN network plugins, so it doesn't seem connected to platform networking.

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-network-operator/1472/pull-ci-openshift-cluster-network-operator-master-e2e-gcp/1532523122136190976

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-network-operator/1472/pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn/1532523122173939712
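For context, the failure message matches a gRPC health check against the registry service on port 50051. A minimal sketch of what such a catalog pod readiness probe looks like, assuming the standard gRPC health-probe pattern (the exact command name, period, and thresholds are assumptions, not taken from this bug):

```yaml
# Hypothetical readiness probe for an openshift-marketplace registry pod.
# The command and port are inferred from the error text
#   'failed to connect service ":50051" within 1s';
# periodSeconds and failureThreshold are illustrative defaults only.
readinessProbe:
  exec:
    command:
      - grpc_health_probe
      - -addr=:50051
  timeoutSeconds: 1      # matches the 1s timeout in the failure message
  periodSeconds: 10
  failureThreshold: 3    # repeated failures produce the Unhealthy events below
```

If the registry server is slow to start serving gRPC, a 1s timeout like this would produce exactly the repeated Unhealthy and BackOff events shown in the job output.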

{  1 events happened too frequently

event happened 23 times, something is wrong: ns/openshift-marketplace pod/redhat-operators-fxpkw node/ci-op-ishmnxnz-9a01a-m8n8c-master-1 - reason/Unhealthy Readiness probe failed: timeout: failed to connect service ":50051" within 1s
}

[sig-arch] events should not repeat pathologically (0s)
{  1 events happened too frequently

event happened 27 times, something is wrong: ns/openshift-marketplace pod/community-operators-8bd8c node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/Unhealthy Readiness probe failed: timeout: failed to connect service ":50051" within 1s
}
[sig-cluster-lifecycle] should not see excessive Back-off restarting failed containers (0s)
{  event [ns/openshift-marketplace pod/redhat-operators-45zdm node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 26 times
event [ns/openshift-marketplace pod/community-operators-8bd8c node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 26 times
event [ns/openshift-marketplace pod/redhat-operators-45zdm node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 67 times
event [ns/openshift-marketplace pod/redhat-operators-45zdm node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 85 times
event [ns/openshift-marketplace pod/redhat-operators-45zdm node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 102 times
event [ns/openshift-marketplace pod/redhat-operators-45zdm node/ci-op-ishmnxnz-b3a20-8659p-master-1 - reason/BackOff Back-off restarting failed container] happened 158 times}

