Referring to the steps in bug 1947801#c4, I only found:

$ cat dep.json | jq -r '.user.username+": "+.requestURI' | sort | uniq | grep customresourcedefinitions
system:serviceaccount:openshift-cluster-version:default: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/credentialsrequests.cloudcredential.openshift.io

The above API request doesn't belong to the management console; it belongs to the OLM component. I will file a new bug on OLM about this and close this bug.
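For environments without jq, the same grouping can be sketched in Python. This is a rough, hypothetical equivalent that assumes the audit dump (`dep.json` above) contains one JSON event per line with `user.username` and `requestURI` fields; the sample events below are made up:

```python
import json

def deprecated_crd_requests(lines):
    """Group audit events into sorted, unique "user: requestURI" strings,
    keeping only requests that touch customresourcedefinitions
    (mirroring the jq | sort | uniq | grep pipeline above)."""
    seen = set()
    for line in lines:
        event = json.loads(line)
        uri = event.get("requestURI", "")
        if "customresourcedefinitions" not in uri:
            continue
        seen.add(event["user"]["username"] + ": " + uri)
    return sorted(seen)

# Made-up sample events in the audit-log shape used above.
sample = [
    '{"user": {"username": "system:serviceaccount:openshift-cluster-version:default"},'
    ' "requestURI": "/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/credentialsrequests.cloudcredential.openshift.io"}',
    '{"user": {"username": "kube:admin"}, "requestURI": "/api/v1/pods"}',
]
for entry in deprecated_crd_requests(sample):
    print(entry)
```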
*** Bug 1965947 has been marked as a duplicate of this bug. ***
Checked the above request; it comes from an APIService, so changing the bug component to openshift-apiserver.

$ oc get apiservice v1beta1.apiextensions.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2021-05-31T02:03:44Z"
  labels:
    kube-aggregator.kubernetes.io/automanaged: onstart
  name: v1beta1.apiextensions.k8s.io
  resourceVersion: "6"
  uid: c6f72333-0a71-4669-9df1-b2195b1715ab
spec:
  group: apiextensions.k8s.io
  groupPriorityMinimum: 16700
  version: v1beta1
  versionPriority: 9
status:
  conditions:
  - lastTransitionTime: "2021-05-31T02:03:44Z"
    message: Local APIServices are always available
    reason: Local
    status: "True"
    type: Available
The APIRemovedInNextReleaseInUse alert caused by this request (https://bugzilla.redhat.com/show_bug.cgi?id=1952049#c2) still exists in the following OCP payload, so I'm assigning the bug back.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-29-114625   True        False         8h      Cluster version is 4.8.0-0.nightly-2021-05-29-114625
It seems the original issue only exists when helm-related things are deployed (I do not know how to deploy them, though) [1]. So to verify this, it seems you also need to deploy the helm-related things.

[1] Like the logging bug 1960549, which only exists when logging is deployed.
Moving to OLM since the original BZ that this was cloned from is opened against the OLM component.
Hi Jakub,

Sorry, I'm confused. As you can see below, the "APIRemovedInNextReleaseInUse" alerts point at the two APIService resources themselves, "apiextensions.k8s.io" and "extensions", because they use "v1beta1", not "v1". And the APIService is an aggregated API managed by the APIServer, so I am transferring this to the Master team. Correct me if I'm wrong, thanks!

mac:~ jianzhang$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-29-114625   True        False         23h     Cluster version is 4.8.0-0.nightly-2021-05-29-114625

mac:~ jianzhang$ curl -k -H "Authorization: Bearer $(oc -n openshift-monitoring sa get-token prometheus-k8s)" https://alertmanager-main-openshift-monitoring.apps.jiazha31.qe.devcluster.openshift.com/api/v1/alerts | jq | grep -i "APIRemovedInNextReleaseInUse" -A5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7385    0  7385    0     0   6344      0 --:--:--  0:00:01 --:--:--  6344
        "alertname": "APIRemovedInNextReleaseInUse",
        "group": "apiextensions.k8s.io",
        "prometheus": "openshift-monitoring/k8s",
        "resource": "customresourcedefinitions",
        "severity": "info",
        "version": "v1beta1"
--
        "alertname": "APIRemovedInNextReleaseInUse",
        "group": "extensions",
        "prometheus": "openshift-monitoring/k8s",
        "resource": "ingresses",
        "severity": "info",
        "version": "v1beta1"

mac:~ jianzhang$ oc get apiservice|grep extensions
v1.apiextensions.k8s.io        Local   True   23h
v1beta1.apiextensions.k8s.io   Local   True   23h
v1beta1.extensions             Local   True   23h

mac:~ jianzhang$ oc get apiservice v1beta1.apiextensions.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2021-05-31T02:54:37Z"
  labels:
    kube-aggregator.kubernetes.io/automanaged: onstart
  name: v1beta1.apiextensions.k8s.io
  resourceVersion: "6"
  uid: 5a77c1f3-dfb7-4be5-9205-3bd44a1d7ee3
spec:
  group: apiextensions.k8s.io
  groupPriorityMinimum: 16700
  version: v1beta1
  versionPriority: 9
status:
  conditions:
  - lastTransitionTime: "2021-05-31T02:54:37Z"
    message: Local APIServices are always available
    reason: Local
    status: "True"
    type: Available

mac:~ jianzhang$ oc get apiservice v1beta1.extensions -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2021-05-31T02:54:37Z"
  labels:
    kube-aggregator.kubernetes.io/automanaged: onstart
  name: v1beta1.extensions
  resourceVersion: "35"
  uid: 3aef5837-7638-4cff-a6cc-7cf27d2a550f
spec:
  group: extensions
  groupPriorityMinimum: 17150
  version: v1beta1
  versionPriority: 1
status:
  conditions:
  - lastTransitionTime: "2021-05-31T02:54:37Z"
    message: Local APIServices are always available
    reason: Local
    status: "True"
    type: Available
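The curl | jq | grep step above can also be done programmatically. A minimal sketch, assuming the Alertmanager v1 API response shape visible in the output (a top-level "data" list of alerts, each carrying a "labels" map); the sample payload is made up and trimmed to those labels:

```python
import json

def removed_api_alerts(payload, alertname="APIRemovedInNextReleaseInUse"):
    """Return the label sets of alerts matching alertname from an
    Alertmanager /api/v1/alerts response body."""
    doc = json.loads(payload)
    return [a.get("labels", {}) for a in doc.get("data", [])
            if a.get("labels", {}).get("alertname") == alertname]

# Made-up response trimmed to the labels shown in the curl output above.
sample = json.dumps({"status": "success", "data": [
    {"labels": {"alertname": "APIRemovedInNextReleaseInUse",
                "group": "apiextensions.k8s.io",
                "resource": "customresourcedefinitions",
                "version": "v1beta1"}},
    {"labels": {"alertname": "Watchdog", "severity": "none"}},
]})
for labels in removed_api_alerts(sample):
    print(labels["group"], labels["resource"], labels["version"])
```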
I checked the api requests; transferring it to the CVO team first for a look.

mac:~ jianzhang$ oc get apirequestcount|grep "1.22"
customresourcedefinitions.v1beta1.apiextensions.k8s.io   1.22   10   372
ingresses.v1beta1.extensions                             1.22   11   379
roles.v1beta1.rbac.authorization.k8s.io                  1.22   0    2

mac:~ jianzhang$ oc get apirequestcount customresourcedefinitions.v1beta1.apiextensions.k8s.io -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2021-05-31T03:04:38Z"
  generation: 1
  name: customresourcedefinitions.v1beta1.apiextensions.k8s.io
  resourceVersion: "596310"
  uid: 1a000031-5beb-4ffa-9878-534df8824bb2
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - byUser:
      - byVerb:
        - requestCount: 9
          verb: get
        requestCount: 9
        userAgent: cluster-version-operator/v0.0.0
        username: system:serviceaccount:openshift-cluster-version:default
      nodeName: 10.0.145.67
      requestCount: 9
    requestCount: 9

All of them come from the cluster-version-operator.

mac:~ jianzhang$ oc get apirequestcount ingresses.v1beta1.extensions -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2021-05-31T03:04:38Z"
  generation: 1
  name: ingresses.v1beta1.extensions
  resourceVersion: "598204"
  uid: 96adbe8d-c7d3-4e27-86c3-3b87e3cf508b
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - byUser:
      - byVerb:
        - requestCount: 6
          verb: watch
        requestCount: 6
        userAgent: kube-controller-manager/v1.21.0
        username: system:kube-controller-manager
      - byVerb:
        - requestCount: 5
          verb: watch
        requestCount: 5
        userAgent: cluster-policy-controller/v0.0.0
        username: system:kube-controller-manager
      nodeName: 10.0.145.67
      requestCount: 11
    requestCount: 11

All of them come from the kube-controller-manager.

mac:~ jianzhang$ oc get apirequestcount roles.v1beta1.rbac.authorization.k8s.io -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2021-05-31T03:04:38Z"
  generation: 1
  name: roles.v1beta1.rbac.authorization.k8s.io
  resourceVersion: "9063"
  uid: 02c57313-0a50-4f56-b493-226164864cee
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - nodeName: 10.0.28.33
      requestCount: 0
    requestCount: 0
  last24h:
  - byNode:
    - nodeName: 10.0.28.33
      requestCount: 0
    requestCount: 0

All of them come from node 10.0.28.33; not sure which component.
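To make the byNode/byUser/byVerb nesting above easier to read, here is a hypothetical helper that flattens an APIRequestCount `.status.currentHour` into per-user totals; the `sample_status` data mirrors the CVO entry in the oc output above and is illustrative only:

```python
def users_by_request_count(status):
    """Flatten the APIRequestCount .status.currentHour byNode/byUser/byVerb
    nesting into {username: total request count}."""
    totals = {}
    for node in status.get("currentHour", {}).get("byNode", []):
        for user in node.get("byUser", []):
            count = sum(v.get("requestCount", 0) for v in user.get("byVerb", []))
            totals[user["username"]] = totals.get(user["username"], 0) + count
    return totals

# Illustrative sample mirroring the CVO entry in the oc output above.
sample_status = {
    "currentHour": {
        "byNode": [{
            "nodeName": "10.0.145.67",
            "byUser": [{
                "username": "system:serviceaccount:openshift-cluster-version:default",
                "userAgent": "cluster-version-operator/v0.0.0",
                "byVerb": [{"verb": "get", "requestCount": 9}],
            }],
        }],
    },
}
print(users_by_request_count(sample_status))
# → {'system:serviceaccount:openshift-cluster-version:default': 9}
```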
This BZ is only about:

system:serviceaccount:openshift-cluster-version:default: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/helmchartrepositories.helm.openshift.io

Please don't mix it up with the other offenders. For those we need further BZs under https://bugzilla.redhat.com/show_bug.cgi?id=1947719.
Thanks, Stefan! The HelmChartRepository did use `v1beta1`:

mac:must-gather jianzhang$ oc get crd helmchartrepositories.helm.openshift.io -o yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
  creationTimestamp: "2021-06-01T07:48:33Z"
  generation: 1
  name: helmchartrepositories.helm.openshift.io
  resourceVersion: "767"
  uid: e5a030bd-da99-4a90-9a66-84794c64b55c
spec:
  conversion:
    strategy: None
  group: helm.openshift.io
  names:
    kind: HelmChartRepository
    listKind: HelmChartRepositoryList
    plural: helmchartrepositories
    singular: helmchartrepository
  scope: Cluster
  versions:
  - name: v1beta1
...

The CRs:

mac:must-gather jianzhang$ oc get HelmChartRepository
NAME               AGE
example            3m57s
redhat-helm-repo   125m

One more concern is that there is no specific CRD info in the metrics, so it's hard to find which CRD is wrong. Is that expected? Thanks!

mac:must-gather jianzhang$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $(oc -n openshift-monitoring sa get-token prometheus-k8s)" 'https://10.0.136.62:6443/metrics' | grep apiserver_requested_deprecated_apis
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# HELP apiserver_requested_deprecated_apis [ALPHA] Gauge of deprecated APIs that have been requested, broken out by API group, version, resource, subresource, and removed_release.
# TYPE apiserver_requested_deprecated_apis gauge
apiserver_requested_deprecated_apis{group="apiextensions.k8s.io",removed_release="1.22",resource="customresourcedefinitions",subresource="",version="v1beta1"} 1
apiserver_requested_deprecated_apis{group="extensions",removed_release="1.22",resource="ingresses",subresource="",version="v1beta1"} 1
apiserver_requested_deprecated_apis{group="policy",removed_release="1.25",resource="podsecuritypolicies",subresource="",version="v1beta1"} 1
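As a side note, metric lines like those above can be picked out of the Prometheus text exposition format with a small sketch like the following; the label-regex approach is an assumption that works for simple label values without escaped quotes:

```python
import re

METRIC = "apiserver_requested_deprecated_apis"

def parse_deprecated_apis(text):
    """Extract label dicts from apiserver_requested_deprecated_apis samples
    in Prometheus text exposition format, skipping HELP/TYPE comments."""
    rows = []
    for line in text.splitlines():
        if not line.startswith(METRIC + "{"):
            continue
        rows.append(dict(re.findall(r'(\w+)="([^"]*)"', line)))
    return rows

# Made-up excerpt in the shape of the grep output above.
sample = (
    '# TYPE apiserver_requested_deprecated_apis gauge\n'
    'apiserver_requested_deprecated_apis{group="apiextensions.k8s.io",'
    'removed_release="1.22",resource="customresourcedefinitions",'
    'subresource="",version="v1beta1"} 1\n'
)
for row in parse_deprecated_apis(sample):
    print(row["group"], row["resource"], "removed in", row["removed_release"])
```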
> The HelmChartRepository did use the `v1beta1`,
> apiVersion: apiextensions.k8s.io/v1
> spec:
> ...
>   versions:
>   - name: v1beta1

Jian Zhang, hi, this issue is NOT about the v1beta1 under 'versions'; rather, it is about the value after 'apiVersion', i.e. apiextensions.k8s.io/v1. I see https://github.com/openshift/api/pull/907/files now fixed apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1. Waiting for helm to bump this fix.
(In reply to Xingxing Xia from comment #13)
> Waiting helm to bump this fix.

Oh, I don't know whether helm has already bumped the fix or not.
Hi Xingxing,

Thanks! But I guess you mixed it up with the others. In the beginning, as I explained in comments 8 and 9, I found there are API requests to "apiextensions.k8s.io/v1beta1", not "v1". And these requests come from the CVO, the kube-controller-manager, and a node. I thought this bug was for fixing them. But, as Stefan said in comment 10, this bug only tracks the Helm CRD issue. I then checked the Helm CRD, as I commented in comment 11: it did use the "v1beta1" version, which is not allowed. But my concern is why the Prometheus metrics, or even the APIRequestCount, do not point out the Helm CRD issue directly, so I asked Stefan for more details.
To be clearer, one more concern of mine is this scenario: as a cluster admin, the user gets an alert from Prometheus, logs in to the cluster, and checks what is wrong. They find that something is wrong with a CRD, but they don't know which specific CRD it is. They have to spend more time finding the affected CRD by checking the audit.log.

@Stefan Do you happen to know why the specific CRD info (name) is not reported directly in the metrics to save users this work? Thanks!

# TYPE apiserver_requested_deprecated_apis gauge
apiserver_requested_deprecated_apis{group="apiextensions.k8s.io",removed_release="1.22",resource="customresourcedefinitions",subresource="",version="v1beta1"} 1
@Jian check the parent BZ for all relevant components (like kcm): https://bugzilla.redhat.com/show_bug.cgi?id=1947719. Also compare OLM BZ https://bugzilla.redhat.com/show_bug.cgi?id=1958296, where they work on notifying the user about the actual operator that is at fault, rather than telling the user that OLM makes questionable API accesses.
helmchartrepositories.helm.openshift.io seems to be fixed. Moving to MODIFIED.
Launched a 4.8.0-0.nightly-2021-06-14-145150 env; it automatically installed helmchartrepositories.helm.openshift.io. Then verified:

$ MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
$ for i in $MASTERS; do oc debug no/$i -- chroot /host bash -c "grep -hE customresourcedefinitions/helmchart /var/log/kube-apiserver/audit*.log" ; done > crd_helmchart.log
$ vi crd_helmchart.log # search v1beta1, the result is empty

The result is empty, which means https://github.com/openshift/api/pull/907 is bumped and the issue is fixed.
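The grep over the audit logs can equally be scripted against the JSON-lines audit format. A hedged sketch, assuming each audit line is a JSON event with `user.username` and `requestURI` fields (the sample line below is made up):

```python
import json

def v1beta1_crd_hits(audit_lines, needle="helmchartrepositories"):
    """Return (username, requestURI) pairs for audit events that hit the
    deprecated v1beta1 CRD endpoint for the named CRD."""
    hits = []
    for line in audit_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines in mixed logs
        uri = event.get("requestURI", "")
        if "/apiextensions.k8s.io/v1beta1/" in uri and needle in uri:
            hits.append((event.get("user", {}).get("username", "?"), uri))
    return hits

# Made-up event: after the fix, only v1 requests remain, so no hits.
sample = [json.dumps({
    "user": {"username": "system:serviceaccount:openshift-cluster-version:default"},
    "requestURI": "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/helmchartrepositories.helm.openshift.io",
})]
print(v1beta1_crd_hits(sample))  # → []
```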
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438