Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1978085

Summary: [flake] Alerts shouldn't report any alerts in firing or pending state apart from Watchdog and AlertmanagerReceiversNotConfigured and have no gaps in Watchdog firing
Product: OpenShift Container Platform
Reporter: Junqi Zhao <juzhao>
Component: kube-apiserver
Assignee: Stefan Schimanski <sttts>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Ke Wang <kewang>
Severity: medium
Docs Contact:
Priority: medium
Version: 4.9
CC: alegrand, amuller, anpicker, aos-bugs, erooth, kakkoyun, mfojtik, pkrupa, pnair, spasquie, xxia
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-07-02 09:50:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Junqi Zhao 2021-07-01 06:20:37 UTC
Description of problem:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.9-e2e-aws/1410150717401862144
[sig-instrumentation][Late] Alerts shouldn't report any alerts in firing or pending state apart from Watchdog and AlertmanagerReceiversNotConfigured and have no gaps in Watchdog firing [Suite:openshift/conformance/parallel] failed
Run #0: Failed (12s)
flake: Unexpected alert behavior during test:
alert KubeAPIErrorBudgetBurn pending for 756.5079998970032 seconds with labels: {long="3d", severity="warning", short="6h"}

same for
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.9-e2e-aws/1410217313440894976

alert details
********************************
      - alert: KubeAPIErrorBudgetBurn
        annotations:
          description: The API server is burning too much error budget. This alert fires
            when too many requests are failing with high latency. Use the 'API Performance'
            monitoring dashboards to narrow down the request states and latency. The 'etcd'
            monitoring dashboards also provides metrics to help determine etcd stability
            and performance.
          summary: The API server is burning too much error budget.
        expr: |
          sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
          and
          sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)
        for: 3h
        labels:
          long: 3d
          severity: warning
          short: 6h
********************************
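For reference, the multi-window condition in the `expr` above can be sketched in Python (a hypothetical illustration, not code from the operator): the alert only enters the pending state when both the long (3d) and short (6h) burn-rate windows exceed the 1% budget threshold, and it fires only after the condition holds for the full `for: 3h` duration.

```python
# Hypothetical sketch of the KubeAPIErrorBudgetBurn trigger condition.
# Mirrors `expr` in the rule above: both burn-rate windows must exceed
# the threshold (1.00 * 0.01000) before the alert goes pending.
ERROR_BUDGET = 0.01  # 1% allowed error rate for the SLO
FACTOR = 1.00        # multiplier taken from the rule expression

def budget_burn_triggered(burnrate_3d: float, burnrate_6h: float) -> bool:
    """Return True when the alert condition holds. The alert is then
    'pending' and only becomes 'firing' after the `for: 3h` hold time."""
    threshold = FACTOR * ERROR_BUDGET
    return burnrate_3d > threshold and burnrate_6h > threshold
```

This is why the CI runs above show the alert as pending for ~756 seconds rather than firing: a transient spike that clears well within the 3h `for:` window never promotes the alert to the firing state, but still trips the test's "no pending alerts" check.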

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
alert KubeAPIErrorBudgetBurn pending

Expected results:


Additional info:

Comment 1 Prashant Balachandran 2021-07-01 11:16:18 UTC
This seems like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1976765

Comment 2 Simon Pasquier 2021-07-01 14:56:23 UTC
The KubeAPIErrorBudgetBurn alert is owned by the kube-apiserver operator since 4.9 [1].

[1] https://github.com/openshift/cluster-kube-apiserver-operator/blob/978d8a39652385d7a179267950d5c638d95f5e7c/bindata/v4.1.0/alerts/kube-apiserver-slos.yaml#L10