Bug 1850089 - OBC CRD is outdated and leads to missing columns in get queries
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: OCS 4.7.0
Assignee: Jose A. Rivera
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-23 14:13 UTC by Ben Eli
Modified: 2023-09-15 00:33 UTC
CC: 11 users

Fixed In Version: 4.7.0-235.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-19 09:14:56 UTC
Embargoed:
Flags: dzaken: needinfo-




Links
Red Hat Product Errata RHSA-2021:2041 (Last Updated: 2021-05-19 09:16:07 UTC)

Description Ben Eli 2020-06-23 14:13:10 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When running `oc get obc` queries in 4.5, some of the columns that used to show up in 4.4 are missing. In an independent mode cluster I have, the columns appear properly, but in a converged mode cluster deployed today, they are still missing.

oc get obc on 4.4 - 
NAME                                             STORAGE-CLASS                             PHASE   AGE
rgw-oc-bucket-4339e498483c4f439b56f832988c8b49   ocs-independent-storagecluster-ceph-rgw   Bound   5d2h

oc get ob on 4.4 -
NAME                                                                   STORAGE-CLASS                             CLAIM-NAMESPACE     CLAIM-NAME                                       RECLAIM-POLICY   PHASE   AGE
obc-openshift-storage-rgw-oc-bucket-4339e498483c4f439b56f832988c8b49   ocs-independent-storagecluster-ceph-rgw   openshift-storage   rgw-oc-bucket-4339e498483c4f439b56f832988c8b49   Delete           Bound   5d2h

oc get obc on 4.5 -
NAME      AGE
testobc   5s

oc get ob on 4.5 - 
NAME                            AGE
obc-openshift-storage-testobc   2m36s

Version of all relevant components (if applicable):
OCS 4.5.0-460.ci
OCP 4.5.0-0.nightly-2020-06-23-020504

OCS 4.4.0-428.ci
OCP 4.4.0-0.nightly-2020-05-25-020741

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Yes - use `oc get obc <name> -o yaml` and check the fields
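
For example (illustrative output only; the field paths are assumptions based on the 4.4 column values), the data behind the missing columns is still present in the object:

oc get obc <name> -o yaml (trimmed):
spec:
  storageClassName: ocs-storagecluster-ceph-rgw   # STORAGE-CLASS column
status:
  phase: Bound                                    # PHASE column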

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:
Yes - in 4.4, the columns show up properly; in converged 4.5, they do not.

Steps to Reproduce:
1. Deploy a converged 4.5 cluster over AWS
2. Create a NooBaa/RGW OBC (a minimal manifest sketch follows these steps)
3. Run `oc get obc`
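
For step 2, a minimal OBC manifest sketch; the storage class name is an assumption for a converged-mode cluster, so check `oc get sc` for the actual name on your cluster:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: testobc
  namespace: openshift-storage
spec:
  generateBucketName: testobc
  # Assumed converged-mode RGW storage class; independent mode uses a name
  # like ocs-independent-storagecluster-ceph-rgw (see the description above).
  storageClassName: ocs-storagecluster-ceph-rgw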


Actual results:
The only columns that show up are NAME and AGE

Expected results:
Additional columns appear (as shown in the description)

Additional info:

Comment 2 Jacky Albo 2020-06-23 14:37:09 UTC
Adding some info I got from Vu Dinh when I tried to figure out what's going on:

So it looks like the issue here is that v1beta1 uses JSONPath, but in v1 the field is renamed to jsonPath. During the conversion, the entire additionalPrinterColumns section is somehow dropped, as you can see it is missing in the CRD YAML that you query from the apiserver.
I think you need to go ahead and update your CRDs to the v1 format.
The v1beta1 CRD API is already deprecated, so it is best to update the CRDs for 4.5+.
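
For illustration, a hedged sketch of the two formats (generic column definitions, not the exact OBC CRD; the actual CRD ships with the operator and may differ):

# apiextensions.k8s.io/v1beta1: columns at spec level, key spelled JSONPath
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: objectbucketclaims.objectbucket.io
spec:
  group: objectbucket.io
  version: v1alpha1
  names:
    kind: ObjectBucketClaim
    plural: objectbucketclaims
  scope: Namespaced
  additionalPrinterColumns:
  - name: Storage-Class
    type: string
    JSONPath: .spec.storageClassName
  - name: Phase
    type: string
    JSONPath: .status.phase
---
# apiextensions.k8s.io/v1: columns nested under each spec.versions entry,
# key spelled jsonPath; a structural schema is also mandatory per version
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: objectbucketclaims.objectbucket.io
spec:
  group: objectbucket.io
  names:
    kind: ObjectBucketClaim
    plural: objectbucketclaims
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    additionalPrinterColumns:
    - name: Storage-Class
      type: string
      jsonPath: .spec.storageClassName
    - name: Phase
      type: string
      jsonPath: .status.phase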

Comment 6 Jose A. Rivera 2020-07-02 13:58:02 UTC
I believe the OBC CRD definitions are in the NooBaa Operator, so moving this to the MCG component.

Nimrod, can you take a look and see if this is something we need for OCS 4.5?

Comment 7 Nimrod Becker 2020-07-06 10:22:49 UTC
What about
deploy/olm-catalog/ocs-operator/manifests/objectbucket.crd.yaml
deploy/olm-catalog/ocs-operator/manifests/objectbucketclaim.crd.yaml

They are not just copied from the noobaa repo, right? I might be missing something here, but if we take the CRDs defined in noobaa, how are we not conflicting with the ones in rook? Isn't it the case that, in OCS, neither rook nor noobaa-operator brings them, and they live within the ocs-operator?

Comment 8 Jose A. Rivera 2020-07-07 15:05:46 UTC
We get the OB and OBC CRD manifests by running:

podman run <NOOBAA_IMAGE> crd yaml

So we're running a subcommand of the main entrypoint of the NooBaa image. We then use this output to generate the files indicated above. Thus, it falls on the NooBaa Operator to update the CRD generation, I believe somewhere around here: https://github.com/noobaa/noobaa-operator/blob/master/pkg/crd/crd.go

Moving back to MCG component.

Comment 9 Michael Adam 2020-07-10 11:54:42 UTC
There is a lack of updates here, but I read in an email that this might require updates in lib-bucket-provisioner and rook.
Because of the medium severity, I'm moving it out of 4.5.0.

Please move back if there's disagreement.

Comment 10 Nimrod Becker 2020-09-08 06:00:35 UTC
@Jose
Do we want to update lib-bucket-provisioner? I'm not sure it's worth it, as it's deprecated and will be replaced with COSI in the not-so-far future...

Comment 11 Danny 2020-09-22 09:25:18 UTC
As mentioned before, we use v1beta1 CRD definitions, which are deprecated in Kubernetes 1.19. This is the case for all OCS CRDs, not only OBC.

In OCP 4.6 there is an automatic conversion of the CRDs from v1beta1 to v1, and this issue doesn't occur.
I think we still need to update all CRDs to v1 and not just rely on the conversion.

This is not only a change in noobaa; it should be synced across all OCS components. It also requires more work than just updating the YAMLs, since some fields have changed (e.g. spec.version was changed to spec.versions).
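
A hedged fragment of the rename in question (fragments only, not complete CRDs):

# v1beta1: single top-level version string
spec:
  group: objectbucket.io
  version: v1alpha1
---
# v1: a versions list; each entry needs served/storage flags and a
# structural schema, which is part of the extra work beyond renaming
spec:
  group: objectbucket.io
  versions:
  - name: v1alpha1
    served: true
    storage: true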

Because of the automatic conversion in OCP 4.6, I am not sure how critical this is for OCS 4.6.

Comment 12 Danny 2020-09-29 14:15:12 UTC
Following the discussion in the OCS operators status meeting (http://post-office.corp.redhat.com/archives/rhocs-eng/2020-September/msg00213.html), pushing to OCS 4.7.

@ben, this is true for all OCS CRDs. Let's make sure we have similar BZs for the other components (rook, ocs-operator).

Comment 13 Elad 2020-10-05 12:08:49 UTC
Hi Danny,

> In OCP 4.6 there is an automatic conversion of the CRDs from v1beta1 to v1, and this issue doesn't occur.
> I think we still need to update all CRDs to v1 and not just rely on the conversion.

If this is the case, don't we need the fix in OCS 4.6 and even backported to 4.5.z?

Comment 14 Danny 2020-10-05 16:51:07 UTC
@elad, since in 4.6 the CRD is converted automatically, this issue will not happen, which I think makes it less critical to fix right now.
Updating the CRDs is something we should do regardless of the automatic conversion. Since it requires some work from all components, we decided to push it to 4.7.

As for 4.5.z, I am not sure this is a critical enough issue to require handling it now.

Comment 15 Ben Eli 2020-10-06 12:41:13 UTC
The topic was discussed via email, and it was decided to create two Jira stories, one for each component, and attach them to this epic -
https://issues.redhat.com/browse/KNIP-1491

Comment 17 Elad 2020-10-29 15:40:56 UTC
Yes, just note that the epic is targeted to 4.7

Comment 18 Nimrod Becker 2021-01-20 16:38:38 UTC
Since the epic is in FF, moving to ON_QA and to the ocs-operator component (according to the epic).

Comment 24 errata-xmlrpc 2021-05-19 09:14:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2041

Comment 25 Red Hat Bugzilla 2023-09-15 00:33:02 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

