Bug 1914994

Summary: Panic observed in k8s-prometheus-adapter since k8s 1.20
Product: OpenShift Container Platform
Component: Monitoring
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Version: 4.7
Target Milestone: ---
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Reporter: Damien Grisonnet <dgrisonn>
Assignee: Damien Grisonnet <dgrisonn>
QA Contact: Junqi Zhao <juzhao>
Docs Contact:
CC: alegrand, anpicker, erooth, kakkoyun, lcosic, pkrupa
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-07-27 22:36:03 UTC
Type: Bug
Regression: ---

Description Damien Grisonnet 2021-01-11 17:30:02 UTC
Description of problem:

Some CI failures caused by the `Undiagnosed panic detected in pod` origin test failing have been reportedly caused by prometheus-adapter. The latest report can be found in [1].
This is already the second time this has been reported since the rebase on Kubernetes 1.20. It was first thought that these panics would be fixed by bumping all Kubernetes dependencies in prometheus-adapter to 1.20.0, but they are still occurring.
That said, the panics first reported in [2] are quite different and seem to have been fixed by https://github.com/openshift/k8s-prometheus-adapter/pull/41, as they no longer occur.

Also, it's worth noting that this is not just a one-in-a-thousand flake. Over the past week, this CI check accounted for 6% of all failures, and out of 442 failures of this particular check, prometheus-adapter appears to be responsible for 71 of them according to [3].

[1] https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cluster-kube-controller-manager-operator/491/pull-ci-openshift-cluster-kube-controller-manager-operator-master-e2e-upgrade/1347197449646641152/artifacts/e2e-upgrade/gather-extra/pods/openshift-monitoring_prometheus-adapter-7956dd46cf-h4d5t_prometheus-adapter.log
[2] https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_kubernetes/471/pull-ci-openshift-kubernetes-master-e2e-aws-selfupgrade/1338383564957290496/artifacts/e2e-aws-selfupgrade/gather-extra/pods/openshift-monitoring_prometheus-adapter-66d5b468c5-f6kmf_prometheus-adapter.log
[3] https://search.ci.openshift.org/?search=Undiagnosed+panic+detected+in+pod&maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job

One such panic can be observed in the prometheus-adapter log linked in [1].
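The `Undiagnosed panic detected in pod` origin check works by scanning pod logs for Go panic signatures. A minimal sketch of that kind of scan (hypothetical; the pattern set and function name are illustrative, not the actual origin implementation):

```python
import re

# Markers that typically open a Go panic trace in a pod log
# (illustrative set, not the exact patterns origin uses).
PANIC_RE = re.compile(r"panic: |Observed a panic|goroutine \d+ \[running\]")

def find_panics(log_text: str) -> list[str]:
    """Return the log lines that look like the start of a Go panic."""
    return [line for line in log_text.splitlines() if PANIC_RE.search(line)]

sample = (
    "I0111 17:30:02 adapter.go:94] listing metrics\n"
    "panic: assignment to entry in nil map\n"
    "goroutine 17 [running]:\n"
)
print(find_panics(sample))
```

A cluster-side equivalent would be grepping `oc logs` output for the same markers.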

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Damien Grisonnet 2021-01-18 15:40:02 UTC
The panic seems to have been fixed in upstream Kubernetes as part of this PR: https://github.com/kubernetes/kubernetes/pull/97820

A backport to 1.20 is currently open: https://github.com/kubernetes/kubernetes/pull/97862. Once it merges, we will need to upgrade client-go in upstream k8s-prometheus-adapter to bring the fix downstream.
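Bringing the fix downstream would amount to bumping the Kubernetes module versions in prometheus-adapter's go.mod once a patch release containing the backport is cut; a hypothetical sketch (v0.20.3 is an assumed version, not confirmed):

```
// go.mod (excerpt) -- hypothetical: pin to whichever v0.20.x release
// actually contains the backported fix from kubernetes/kubernetes#97862.
require (
    k8s.io/apimachinery v0.20.3
    k8s.io/client-go    v0.20.3
)
```

followed by `go mod tidy` and a rebuild of the adapter.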

Comment 5 Junqi Zhao 2021-02-10 06:32:30 UTC
Tested with 4.8.0-0.nightly-2021-02-09-221546; prometheus-adapter is now v0.8.3.

Comment 8 errata-xmlrpc 2021-07-27 22:36:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438