Bug 2098621
| Summary: | Warning on OCP 4.10: deprecated api | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Ramon Gordillo <ramon.gordillo> |
| Component: | rook | Assignee: | Subham Rai <srai> |
| Status: | CLOSED ERRATA | QA Contact: | Mugdha Soni <musoni> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.10 | CC: | ebenahar, madam, muagarwa, musoni, ocs-bugs, odf-bz-bot, paarora, srai, tnielsen |
| Target Milestone: | --- | ||
| Target Release: | ODF 4.12.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | 4.11.0-124 | Doc Type: | No Doc Update |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2023-01-31 00:19:21 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
This looks similar to bz https://bugzilla.redhat.com/show_bug.cgi?id=2079919, which is also about the v1beta1 CronJob, and I commented on that bz at https://bugzilla.redhat.com/show_bug.cgi?id=2079919#c3:

```
This is expected: before creating v1 resources for CronJob we delete the v1beta1 ones, which is why we see this warning. The warning itself confirms that it is raised only for deletes (`'verb': 'delete'`), as expected. So I think we can close this BZ if we agree.
```

It is raising alerts in OCP. If the jobs have been migrated, why keep retrying every few hours? The OCP admins are not happy with the alerts.

Only regression runs are required.

Hi Subham, I would request you to share detailed steps to reproduce / validate the fix. Thanks, Mugdha Soni

To verify this bz, install ODF on OCP 4.10 or newer, which has k8s version 1.23, and check in the rook-operator logs that there is no more than one occurrence of the v1beta1 CronJob warning. I'm not sure about the OCP warning and how to validate that, but according to the bz report the output of `oc get apirequestcounts cronjobs.v1beta1.batch -o yaml` should report the count. @ramon.gordillo, any idea how to validate it from the OCP alerts?

Hi Subham
I installed cluster with
OCP : 4.11.0-0.nightly-2022-08-11-023608
ODF : 4.11.0-135
```
[root@localhost tes]# oc version
Client Version: 4.5.6
Server Version: 4.11.0-0.nightly-2022-08-11-023608
Kubernetes Version: v1.24.0+9546431
```
-------------------------------------------------------------------------------------
Output of `oc get apirequestcounts cronjobs.v1beta1.batch -o yaml`:

```
[root@localhost tes]# oc get apirequestcounts cronjobs.v1beta1.batch -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2022-08-11T12:28:07Z"
  generation: 1
  managedFields:
  - apiVersion: apiserver.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:numberOfUsersToReport: {}
    manager: kube-apiserver
    operation: Update
    time: "2022-08-11T12:28:07Z"
  - apiVersion: apiserver.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:currentHour:
          .: {}
          f:byNode: {}
          f:requestCount: {}
        f:last24h: {}
        f:removedInRelease: {}
        f:requestCount: {}
    manager: kube-apiserver
    operation: Update
    subresource: status
    time: "2022-08-11T12:28:07Z"
  name: cronjobs.v1beta1.batch
  resourceVersion: "95375"
  uid: 203a6ea8-8436-4273-9ced-27f321536f4c
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - byUser:
      - byVerb:
        - requestCount: 27
          verb: delete
        requestCount: 27
        userAgent: rook/v0.0.0
        username: system:serviceaccount:openshift-storage:rook-ceph-system
      nodeName: 10.1.160.77
      requestCount: 27
    requestCount: 27
  last24h:
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - byUser:
      - byVerb:
        - requestCount: 27
          verb: delete
        requestCount: 27
        userAgent: rook/v0.0.0
        username: system:serviceaccount:openshift-storage:rook-ceph-system
      nodeName: 10.1.160.77
      requestCount: 27
    requestCount: 27
  - requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.1.160.77
      requestCount: 0
    requestCount: 0
  removedInRelease: "1.25"
  requestCount: 27
```
Please let me know which alerts were being generated and where to check them.
Thanks
Mugdha
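As a side note, the nested `byNode`/`byUser`/`byVerb` counters in the output above can be tallied mechanically. The following is a hypothetical helper (not part of the bug or of rook) that sums per-verb request counts from an APIRequestCount status hour, shown here against a dict mirroring the `currentHour` block reported above:

```python
# Hypothetical helper: tally API request counts per verb from an
# APIRequestCount status hour (field names follow apiserver.openshift.io/v1).
from collections import Counter

def count_by_verb(status_hour):
    """Sum requestCount per verb across all nodes and users in one hour."""
    totals = Counter()
    for node in status_hour.get("byNode", []):
        for user in node.get("byUser", []):
            for verb in user.get("byVerb", []):
                totals[verb["verb"]] += verb["requestCount"]
    return dict(totals)

# Mirrors the currentHour block above: 27 delete requests from rook.
current_hour = {
    "byNode": [
        {
            "byUser": [
                {
                    "byVerb": [{"requestCount": 27, "verb": "delete"}],
                    "requestCount": 27,
                    "userAgent": "rook/v0.0.0",
                    "username": "system:serviceaccount:openshift-storage:rook-ceph-system",
                }
            ],
            "nodeName": "10.1.160.77",
            "requestCount": 27,
        }
    ],
    "requestCount": 27,
}
```

Run against the sample, the only verb reported is `delete`, matching the observation in the comments that the warning comes from deletes only.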
@musoni Please mark this failed on QA since we are still seeing `requestCount: 27`; we are debugging further. Thanks.

Based on comment #14, moving this bug to failed QA. Thanks, Mugdha

Moving out of 4.11.

This is in the 4.12 builds. To test it we need a Kubernetes version older than 1.25, since the v1beta1 CronJob is removed in k8s 1.25. We can try testing on OCP 4.11, which has a <1.25 k8s version, IIRC.

Hi Subham
I re-tested with the following versions:

```
[root@localhost bugs]# oc version
Client Version: 4.5.6
Server Version: 4.11.0-0.nightly-2022-10-26-170309
Kubernetes Version: v1.24.6+5157800

[root@localhost bugs]# oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-10-26-170309   True        False         150m    Cluster version is 4.11.0-0.nightly-2022-10-26-170309

[root@localhost bugs]# oc get csv odf-operator.v4.12.0 -o yaml -n openshift-storage | grep full_version
    full_version: 4.12.0-82
      f:full_version: {}
```

But I see an error:

```
[root@localhost bugs]# oc get apirequestcounts cronjobs.v1beta1.batch -o yaml
Error from server (NotFound): apirequestcounts.apiserver.openshift.io "cronjobs.v1beta1.batch" not found
```
Please correct me if I am wrong, and feel free to move the bug back to ON_QA.
Thanks and Regards
Mugdha Soni
(In reply to Mugdha Soni from comment #22)

I think we can move it to verified, since our goal was to have no v1beta1 CronJob API requests and only v1 CronJob requests, and from the command above we can see there were no v1beta1 requests. I got the cluster offline from you, and on that cluster, if we check the v1 CronJob API requests, we can see rook is making them:

```
oc get apirequestcounts cronjobs.v1.batch -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2022-10-27T08:39:01Z"
  generation: 1
  name: cronjobs.v1.batch
  resourceVersion: "118973"
  uid: 7a882223-12fd-451f-b799-7bf6cb6f10f6
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - byUser:
      - byVerb:
        - requestCount: 28
          verb: delete
        requestCount: 28
        userAgent: rook/v0.0.0
        username: system:serviceaccount:openshift-storage:rook-ceph-system
```

So I think we are good here. I am marking needinfo on @tnielsen to keep me honest here.
Thanks

Agreed, it looks expected that rook is calling the v1 API.

Based on comments #23 and #24, moving this bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.12.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:0551
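The pass/fail criterion applied in these comments (no v1beta1 CronJob traffic, while v1 traffic is present) can be sketched as a small check. This is an illustrative, hypothetical sketch rather than actual QA tooling; `None` stands in for the NotFound case where the v1beta1 APIRequestCount object does not exist at all:

```python
# Hypothetical sketch of the verification criterion discussed in this bug.
def fix_verified(v1beta1_count, v1_count):
    """Return True if only the v1 CronJob API is in use.

    v1beta1_count: total request count for cronjobs.v1beta1.batch,
                   or None if the APIRequestCount object was NotFound.
    v1_count:      total request count for cronjobs.v1.batch.
    """
    no_legacy_traffic = v1beta1_count is None or v1beta1_count == 0
    return no_legacy_traffic and v1_count > 0
```

On the re-test, the v1beta1 object was NotFound while `cronjobs.v1.batch` showed 28 requests, which this check reads as a pass; the original report (27 v1beta1 deletes) reads as a failure.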
Description of problem (please be detailed as possible and provide log snippets):

Rook seems to still use a deprecated API on OCP 4.10 (k8s 1.23.5):

```
Deprecated API that will be removed in the next EUS version is being used. Removing the workload that is using the batch.v1beta1/cronjobs API might be necessary for a successful upgrade to the next EUS cluster version. Refer to `oc get apirequestcounts cronjobs.v1beta1.batch -o yaml` to identify the workload.
```

```
oc get apirequestcounts cronjobs.v1beta1.batch -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2022-04-12T16:37:50Z"
  generation: 1
  name: cronjobs.v1beta1.batch
  resourceVersion: "78306097"
  uid: a8dddafd-2b9b-459f-9267-298b86b334cd
spec:
  numberOfUsersToReport: 10
...
  - byNode:
    - nodeName: 10.39.176.101
      requestCount: 0
    - byUser:
      - byVerb:
        - requestCount: 3
          verb: delete
        requestCount: 3
        userAgent: rook/v0.0.0
        username: system:serviceaccount:openshift-storage:rook-ceph-system
      nodeName: 10.39.176.102
      requestCount: 3
    - nodeName: 10.39.176.103
      requestCount: 0
    requestCount: 3
...
```

Version of all relevant components (if applicable):
OCP 4.10.17
OCS 4.10.3

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No. However, it could be a potential issue in an upgrade.

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible? Always

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
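For context, the EUS deprecation warning quoted in this report fires when a counted API is both marked for removal and still receiving traffic. Below is a hypothetical sketch of that condition, using the field names from the `apiserver.openshift.io/v1` output shown above; it is an illustration of the idea, not OpenShift's actual alerting rule:

```python
# Hypothetical sketch: flag an APIRequestCount whose API is slated for
# removal (removedInRelease set) and is still being used (requestCount > 0).
def should_warn(api_request_count):
    status = api_request_count.get("status", {})
    return bool(status.get("removedInRelease")) and status.get("requestCount", 0) > 0

# Shaped like this report: 3 rook delete requests against a
# to-be-removed API is enough to trigger the warning.
sample = {"status": {"removedInRelease": "1.25", "requestCount": 3}}
```

This is why even a small number of v1beta1 delete requests from rook kept the admin-facing alert active until the operator stopped touching the old API entirely.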