Bug 1747479
Summary: | kube-apiserver: excessive logging "OpenAPI AggregationController: Processing item" | | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Seth Jennings <sjenning> |
Component: | kube-apiserver | Assignee: | Stefan Schimanski <sttts> |
Status: | CLOSED ERRATA | QA Contact: | Ke Wang <kewang> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | | |
Version: | 4.2.0 | CC: | aos-bugs, fshaikh, mfojtik, ocasalsa, rhowe, sttts, xxia |
Target Milestone: | --- | | |
Target Release: | 4.4.0 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2020-05-04 11:13:32 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Seth Jennings
2019-08-30 14:21:03 UTC
*** Bug 1747268 has been marked as a duplicate of this bug. ***

Hi team,

We have a customer who is facing the same issue on OCP 4.2.7: excessive logging (36 msgs/second) of messages like the one below.

I1125 14:02:12.550102 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001

As of now, he has re-deployed the cluster and is not facing the issue in the new one.

Thanks,
Fatima

Hi,

Since OCP 4.3 is released, is this bug fixed there? Any updates would be appreciated.

Thanks,
Fatima

This still happens in 4.3.0:

I0203 16:00:18.926067 1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io
I0203 16:00:20.980164 1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io
I0203 16:00:23.470490 1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io
I0203 16:00:25.661778 1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io
I0203 16:00:27.330009 1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com
I0203 16:00:29.283581 1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io
I0203 16:00:31.461177 1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io
I0203 16:00:33.648494 1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io
I0203 16:00:35.687155 1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io
I0203 16:00:38.080755 1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io
I0203 16:00:40.981177 1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io
I0203 16:00:42.906006 1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io
I0203 16:01:02.518120 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0203 16:01:02.538167 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0203 16:01:02.538341 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0203 16:01:18.930814 1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io

Verified with OCP build 4.4.0-0.nightly-2020-02-13-103342. Searching for the keyword 'OpenAPI AggregationController' in https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-serial-4.4/813/artifacts/e2e-openstack-serial/pods/openshift-kube-apiserver_kube-apiserver-rtvq59zz-bc4a8-9zq6p-master-0_kube-apiserver.log, the message 'OpenAPI AggregationController: Processing item ...' can no longer be found.

Hello,

I know that this issue is to be fixed in OCP 4.4, but at this point people using 4.2 have no upgrade path due to BZ1810036. Could this be backported to 4.2? This behaviour severely limits the ability to use the centralized logging feature, since it floods all the logs.

Thank you in advance.
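The log excerpts above show the aggregation controller emitting an informational "Processing item" line for every aggregated group/version it revisits, every few seconds, which is what floods the kube-apiserver log. As a minimal sketch of the usual remedy in Kubernetes components (not necessarily the exact upstream patch for this bug), such a message can be gated behind a klog verbosity level so it only appears when the component runs with an elevated `-v` flag; the `processItem` function below is a hypothetical stand-in for the controller's per-item work.

```go
package main

import (
	"k8s.io/klog/v2"
)

// processItem is a hypothetical stand-in for the aggregation controller's
// per-item work; the real code lives in the kube-aggregator OpenAPI controller.
func processItem(key string) {
	// Unconditional info logging is what floods the kube-apiserver log:
	// klog.Infof("OpenAPI AggregationController: Processing item %s", key)

	// Gating the message behind a verbosity level keeps it available for
	// debugging (run with -v=4 or higher) without flooding the default log.
	klog.V(4).Infof("OpenAPI AggregationController: Processing item %s", key)
}

func main() {
	klog.InitFlags(nil) // registers the -v flag used to enable verbose output
	defer klog.Flush()

	processItem("v1.route.openshift.io")
}
```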
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.
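The 4.3.0 excerpt above also shows "action for item v1beta1.metrics.k8s.io: Rate Limited Requeue." after the OpenAPI spec for that service fails to load, which is why the same item keeps reappearing in the log. The sketch below illustrates, under the assumption that the controller follows the common client-go pattern (the `loadSpec` helper is hypothetical), how failed keys are put back onto a rate-limited workqueue so retries back off instead of repeating immediately.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// loadSpec is a hypothetical stand-in for fetching an aggregated API's OpenAPI spec.
func loadSpec(key string) error {
	return fmt.Errorf("OpenAPI spec does not exist")
}

func main() {
	// A rate-limiting queue applies an exponentially increasing delay to each
	// AddRateLimited call for the same key, so a persistently failing item
	// (such as v1beta1.metrics.k8s.io above) is retried less and less often.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer queue.ShutDown()

	key := "v1beta1.metrics.k8s.io"
	if err := loadSpec(key); err != nil {
		fmt.Printf("loading OpenAPI spec for %q failed with: %v\n", key, err)
		queue.AddRateLimited(key) // requeue with backoff instead of retrying immediately
	} else {
		queue.Forget(key) // success: reset the key's backoff counter
	}
}
```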