Clusters deployed with the 1.18 origin rebase PR [1] have the following in the apiserver logs every 60s:

controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

This problem doesn't appear to be limited to the rebase PR, as per a longstanding GitHub issue [2]. Beyond eliminating the log spew, it would be helpful to know what functional impact, if any, these logged errors represent.

1: https://github.com/openshift/origin/pull/24719
2: https://github.com/coreos/kube-prometheus/issues/304
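For anyone triaging this, a few suggested checks (not from the original report) to confirm the aggregated metrics API is registered and responding; they assume cluster-admin access and the default setup where the APIService is backed by prometheus-adapter in openshift-monitoring:

$ oc get apiservice v1beta1.metrics.k8s.io                        # should report Available=True
$ oc get --raw /apis/metrics.k8s.io/v1beta1                       # discovery request against the aggregated API
$ oc -n openshift-monitoring get pods | grep prometheus-adapter   # backing pods (assumes default monitoring stack)

If the APIService is Available and discovery works, the errors above are limited to the OpenAPI spec fetch rather than the metrics API itself.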
4.4.0-0.nightly-2020-03-29-132004 also reproduces:

$ oc logs -n openshift-kube-apiserver kube-apiserver-ip-10-0-134-38.ap-northeast-2.compute.internal -c kube-apiserver
...
I0331 02:35:46.842201 1 aggregator.go:226] Updating OpenAPI spec because v1.route.openshift.io is updated
I0331 02:35:47.317974 1 aggregator.go:229] Finished OpenAPI spec generation after 475.760307ms
I0331 02:35:47.318047 1 aggregator.go:226] Updating OpenAPI spec because v1beta1.metrics.k8s.io is updated
I0331 02:35:47.818875 1 aggregator.go:229] Finished OpenAPI spec generation after 500.807236ms
I0331 02:35:47.818956 1 aggregator.go:226] Updating OpenAPI spec because v1.user.openshift.io is updated
I0331 02:35:48.273538 1 aggregator.go:229] Finished OpenAPI spec generation after 454.560076ms
...snipped...
I0331 02:35:59.218097 1 aggregator.go:226] Updating OpenAPI spec because v1.route.openshift.io is updated
I0331 02:36:00.378834 1 aggregator.go:229] Finished OpenAPI spec generation after 1.160710014s
E0331 02:36:00.382542 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0331 02:36:00.382560 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0331 02:36:00.649455 1 aggregator.go:226] Updating OpenAPI spec because v1.user.openshift.io is updated
...snipped...
I0331 02:36:08.140496 1 aggregator.go:226] Updating OpenAPI spec because v1.template.openshift.io is updated
I0331 02:36:09.401918 1 aggregator.go:229] Finished OpenAPI spec generation after 1.261388097s
E0331 02:37:00.386352 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0331 02:37:00.386371 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0331 02:39:00.397969 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0331 02:39:00.397986 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0331 02:40:44.528108 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0331 02:40:44.528125 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0331 02:41:44.536408 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0331 02:41:44.536428 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0331 02:43:44.543085 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
...all following log entries are just this spam from here on...
Like the log spam in bug 1537270#c33 / bug 1819103 that annoyed customers, I think this bug is similarly annoying. Updating the bug fields accordingly.
*** Bug 1820468 has been marked as a duplicate of this bug. ***
FWIW, I just hit this issue in my metal UPI cluster after upgrading from 4.5.5 to 4.5.7. My cluster is firing the KubeAPIErrorsHigh alert, and the kube-apiserver log has the same messages from this report.
(In reply to Russell Bryant from comment #17)
> FWIW, I just hit this issue in my metal UPI cluster after upgrading from
> 4.5.5 to 4.5.7. My cluster is firing the KubeAPIErrorsHigh alert, and the
> kube-apiserver log has the same messages from this report.

Actually, it's not clear that the alert and this log spam are related. I just happened to see them mentioned together in a Slack conversation, which led me to this bug.
KubeAPIErrorsHigh and this issue are not related.
Setting priority to high and leaving severity at low, as there is no functional impact. Priority is high because this also affects upstream functionality in a broader sense: the metrics-server API library is used in many places.
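To back up the "no functional impact" assessment, a quick sanity check (suggested here, not part of the original comments) is to exercise the resource metrics pipeline directly; if these return node usage, the v1beta1.metrics.k8s.io API is serving requests even while the OpenAPI spec load keeps failing:

$ oc adm top nodes                                   # resource metrics served via v1beta1.metrics.k8s.io
$ oc get --raw /apis/metrics.k8s.io/v1beta1/nodes    # raw request against the same aggregated endpoint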
Reproduced on 4.6.0-0.nightly-2020-10-08-182439:

# oc -n openshift-kube-apiserver logs kube-apiserver-ip-10-0-131-187.eu-west-1.compute.internal -c kube-apiserver | grep "v1beta1.metrics.k8s.io" | tail -n 5
I1009 00:45:33.398180 17 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1009 00:47:33.401701 17 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I1009 00:47:33.401718 17 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1009 00:49:33.396989 17 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I1009 00:49:33.397006 17 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
The fix is still not in any payload as of now.
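For anyone checking whether a given nightly already carries the fix, one option (a sketch with a placeholder pull spec) is to list the payload's image commits; the kube-apiserver is built from openshift/kubernetes and, as far as I understand, ships in the hyperkube image:

$ oc adm release info <nightly-release-pullspec> --commits | grep hyperkube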
Issue exists in payload 4.7.0-0.nightly-2020-10-27-051128
Tested with 4.7.0-0.nightly-2020-11-05-010603, no "OpenAPI spec does not exist" error now:

# for i in $(oc -n openshift-kube-apiserver get pod | grep kube-apiserver | awk '{print $1}'); do echo $i; oc -n openshift-kube-apiserver logs $i -c kube-apiserver | grep "OpenAPI spec does not exist" | tail -n 5; echo -e "\n"; done
kube-apiserver-ip-10-0-158-63.ap-southeast-2.compute.internal

kube-apiserver-ip-10-0-180-210.ap-southeast-2.compute.internal

kube-apiserver-ip-10-0-206-75.ap-southeast-2.compute.internal
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633