Bug 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 4.6.0
Assignee: Jose A. Rivera
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-23 14:52 UTC by Neha Berry
Modified: 2020-12-17 06:23 UTC
5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 06:23:00 UTC
Embargoed:


Attachments (Terms of Use)
ocs-operator-csv-OB-OBC-ownership (141.24 KB, text/plain)
2020-08-04 19:39 UTC, Neha Berry


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:5605 0 None None None 2020-12-17 06:23:39 UTC

Description Neha Berry 2020-07-23 14:52:11 UTC
Description of problem (please be detailed as possible and provide log
snippets):
----------------------------------------------------------------------
Since OCS 4.5, the OB and OBC CRDs are no longer owned by the lib-bucket-provisioner (LBP). Hence, the LBP operator CSV and pod should not be deployed as part of an OCS 4.6 deployment.

However, as seen in OCS 4.6 deployments, LBP is still being deployed:

>> PODS
lib-bucket-provisioner-6d56499d9f-hw7jj                           1/1     Running                      0          19m   10.131.0.9     ip-10-0-138-152.us-west-1.compute.internal   <none>           <none>

>>CSV
lib-bucket-provisioner.v1.0.0.yaml	
ocs-operator.v4.6.0-26.ci.yaml

Logs: https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ocs-ci/521/artifact/logs/failed_testcase_ocs_logs_1595506351/deployment_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-260c9c0e2cee6fdb87bf20fd11417483e9da9fa1fb6de1efd6ff2b0b9761d850/

Version of all relevant components (if applicable):
----------------------------------------------------------------------
OCS =   4.6.0-26.ci /  ocs-olm-operator:4.6.0-504.ci
OCP  = 4.6.0-0.nightly-2020-07-23-080857

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
----------------------------------------------------------------------
No, but it seems the fixes made for OCS 4.5 (more details in Bug 1798571) were not brought forward to OCS 4.6.

Is there any workaround available to the best of your knowledge?
----------------------------------------------------------------------
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
----------------------------------------------------------------------
3

Is this issue reproducible?
----------------------------------------------------------------------
Yes

Can this issue be reproduced from the UI?
----------------------------------------------------------------------
Not tested

If this is a regression, please provide more details to justify this:
----------------------------------------------------------------------
Yes. From OCS 4.5 onwards, LBP no longer needs to be deployed.

Steps to Reproduce:
----------------------------------------------------------------------
1. Install OCP 4.6
2. Install OCS 4.6 via ocs-ci 
3. Check whether the LBP pod and CSV exist (they should not)
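The check in step 3 can be sketched as follows. On a live cluster the CSV and pod names would come from `oc get csv` and `oc get pods` in the openshift-storage namespace; the sample names below are taken from the buggy 4.6.0-26.ci deployment in this report, and the helper name `lbp_artifacts` is hypothetical:

```python
# Minimal sketch: flag a deployment where lib-bucket-provisioner artifacts
# still exist. Sample names are from the buggy 4.6.0-26.ci deployment; on a
# live cluster they would come from `oc get csv` / `oc get pods`.
csvs = ["lib-bucket-provisioner.v1.0.0", "ocs-operator.v4.6.0-26.ci"]
pods = ["lib-bucket-provisioner-6d56499d9f-hw7jj", "ocs-operator-6cb5977cb7-52ng5"]

def lbp_artifacts(names):
    """Return any CSV/pod names that belong to lib-bucket-provisioner."""
    return [n for n in names if n.startswith("lib-bucket-provisioner")]

leftovers = lbp_artifacts(csvs) + lbp_artifacts(pods)
print("FAIL" if leftovers else "PASS", leftovers)
# prints: FAIL ['lib-bucket-provisioner.v1.0.0', 'lib-bucket-provisioner-6d56499d9f-hw7jj']
```

A fixed deployment (such as the 4.6.0-36.ci build verified below) would produce an empty list and print PASS.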

Actual results:
----------------------------------------------------------------------
LBP pod and CSV exist.

Expected results:
----------------------------------------------------------------------
The OB and OBC CRDs should be listed under "owned:" in the ocs-operator CSV, not "required:", and LBP should no longer be deployed.

Additional info:
----------------------------------------------------------------------

As seen in the CSV, the OBC and OB are under "required:" instead of "owned:"

    required:
    - description: Claim a bucket just like claiming a PV. Automate you app bucket
        provisioning by creating OBC with your app deployment. A secret and configmap
        (name=claim) will be created with access details for the app pods.
      displayName: Object Bucket Claim
      kind: ObjectBucketClaim
      name: objectbucketclaims.objectbucket.io
      resources:
      - kind: Service
        name: services
        version: v1
      - kind: Secret
        name: secrets
        version: v1
      - kind: ConfigMap
        name: configmaps
        version: v1
      - kind: StatefulSet
        name: statefulsets.apps
        version: v1
      version: v1alpha1
    - description: Used under-the-hood. Created per ObjectBucketClaim and keeps provisioning
        information.
      displayName: Object Bucket
      kind: ObjectBucket
      name: objectbuckets.objectbucket.io
      resources:
      - kind: Service
        name: services
        version: v1
      - kind: Secret
        name: secrets
        version: v1
      - kind: ConfigMap
        name: configmaps
        version: v1
      - kind: StatefulSet
        name: statefulsets.apps
        version: v1
      version: v1alpha1
  description: |2
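The misplacement shown in the excerpt above can be checked mechanically. A minimal sketch, assuming the CSV's customresourcedefinitions section has already been parsed into a dict (e.g. with a YAML parser); the helper name `misplaced_crds` is hypothetical:

```python
# Hypothetical helper: given the customresourcedefinitions section of a CSV
# (parsed into a dict), return the OB/OBC CRD names that appear under
# "required" without also appearing under "owned".
OBC_CRDS = {"objectbucketclaims.objectbucket.io", "objectbuckets.objectbucket.io"}

def misplaced_crds(crd_section):
    owned = {c["name"] for c in crd_section.get("owned", [])}
    required = {c["name"] for c in crd_section.get("required", [])}
    # OB/OBC must be owned by the ocs-operator CSV from 4.5 onwards.
    return sorted((OBC_CRDS & required) - owned)

# Shape observed in the buggy 4.6.0-26.ci CSV: OB/OBC under "required".
buggy = {
    "required": [
        {"name": "objectbucketclaims.objectbucket.io", "kind": "ObjectBucketClaim"},
        {"name": "objectbuckets.objectbucket.io", "kind": "ObjectBucket"},
    ],
}
print(misplaced_crds(buggy))
# prints: ['objectbucketclaims.objectbucket.io', 'objectbuckets.objectbucket.io']
```

On a correct CSV, where both CRDs sit under "owned:", the function returns an empty list.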

Comment 3 Jose A. Rivera 2020-08-03 15:08:09 UTC
This was missed because the release-4.6 branch of ocs-operator had not yet been synchronized with master to the point where the fix was included. This sync has since happened and made it into an OCS 4.6 build. Moving to ON_QA.

Comment 4 Neha Berry 2020-08-04 19:39:14 UTC
Created attachment 1710412 [details]
ocs-operator-csv-OB-OBC-ownership

Verified in ocs-operator.v4.6.0-36.ci. The LBP CSV and pod are not created in the latest OCS 4.6 build. Hence, moving the BZ to the verified state.

Logs folder - https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ocs-ci/539/artifact/logs/failed_testcase_ocs_logs_1596541019/test_deployment_ocs_logs/ocs_must_gather/

The OB and OBC CRDs are under "owned:" [1]

spec:
  apiservicedefinitions: {}
  customresourcedefinitions:
    owned:


[1] - https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ocs-ci/539/artifact/logs/failed_testcase_ocs_logs_1596541019/test_deployment_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-6bc402f2e92f1b5c72c4360b8da5aa6bfe91ee3634a608a937aa5eddab45598e/ceph/namespaces/openshift-storage/operators.coreos.com/clusterserviceversions/ocs-operator.v4.6.0-36.ci.yaml/*view*/



>>oc get csv
NAME                        DISPLAY                       VERSION       REPLACES   PHASE
ocs-operator.v4.6.0-36.ci   OpenShift Container Storage   4.6.0-36.ci              Succeeded

>>oc get pods -o wide

csi-cephfsplugin-2jflv                                            3/3     Running     0          8m26s   10.0.235.238   ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
csi-cephfsplugin-7x46p                                            3/3     Running     0          8m26s   10.0.168.135   ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
csi-cephfsplugin-bbc5r                                            3/3     Running     0          8m26s   10.0.172.27    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
csi-cephfsplugin-provisioner-5c8f64c977-47m7c                     5/5     Running     0          8m26s   10.129.2.15    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
csi-cephfsplugin-provisioner-5c8f64c977-ktdl2                     5/5     Running     0          8m26s   10.128.2.8     ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-gvwtn                                               3/3     Running     0          8m27s   10.0.168.135   ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-j9c4s                                               3/3     Running     0          8m27s   10.0.235.238   ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-l8rpf                                               3/3     Running     0          8m27s   10.0.172.27    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
csi-rbdplugin-provisioner-78bf66999-45n7r                         6/6     Running     0          8m26s   10.128.2.7     ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-provisioner-78bf66999-nv6fr                         6/6     Running     0          8m26s   10.131.0.23    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
noobaa-core-0                                                     1/1     Running     0          5m16s   10.131.0.32    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
noobaa-db-0                                                       1/1     Running     0          5m16s   10.129.2.23    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
noobaa-endpoint-6d84cf4645-49nqw                                  1/1     Running     0          3m29s   10.129.2.24    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
noobaa-operator-6c8489d556-8nm2w                                  1/1     Running     0          9m6s    10.129.2.12    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
ocs-operator-6cb5977cb7-52ng5                                     1/1     Running     0          9m7s    10.129.2.13    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-crashcollector-ip-10-0-168-135-c5f67b4c5-hlbwg          1/1     Running     0          7m9s    10.128.2.15    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-172-27-6b4fc8c646-dqf85          1/1     Running     0          6m55s   10.129.2.19    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-crashcollector-ip-10-0-235-238-7689557766-n6z67         1/1     Running     0          6m21s   10.131.0.30    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-drain-canary-800411b2bffc077f1e724b2666dc76a0-68t9bdr   1/1     Running     0          5m13s   10.129.2.21    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-drain-canary-85a40cc5f42ba13f517a273be730f279-869xpds   1/1     Running     0          5m14s   10.128.2.12    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-drain-canary-a42cc55b9ca869a4e6fa95bebb7822ee-5dgqh6j   1/1     Running     0          5m13s   10.131.0.33    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-bd5cb6d9hqklt   1/1     Running     0          4m59s   10.128.2.14    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-59f98496d7gzb   1/1     Running     0          4m59s   10.131.0.35    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mgr-a-59f45c7599-cpjkt                                  1/1     Running     0          5m56s   10.129.2.18    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-mon-a-b7675d879-w2h25                                   1/1     Running     0          7m9s    10.128.2.10    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-mon-b-d4cb97979-shx2g                                   1/1     Running     0          6m56s   10.129.2.17    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-mon-c-6b9b694b9c-2whqz                                  1/1     Running     0          6m21s   10.131.0.29    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-operator-584998d899-5d4vg                               1/1     Running     0          9m6s    10.129.2.14    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-osd-0-9648c4785-b65gp                                   1/1     Running     0          5m24s   10.128.2.13    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-1-5557674b5d-42ccd                                  1/1     Running     0          5m25s   10.131.0.34    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-2-84b59db885-frd28                                  1/1     Running     0          5m18s   10.129.2.22    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-0-data-0-frp99-99bxl          0/1     Completed   0          5m54s   10.131.0.31    ip-10-0-235-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-1-data-0-5cxpf-f9nh9          0/1     Completed   0          5m54s   10.128.2.11    ip-10-0-168-135.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-2-data-0-gm2gk-xz5sb          0/1     Completed   0          5m53s   10.129.2.20    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>
rook-ceph-tools-6f67984956-w9m62                                  1/1     Running     0          4m44s   10.0.172.27    ip-10-0-172-27.us-west-1.compute.internal    <none>           <none>

Comment 9 errata-xmlrpc 2020-12-17 06:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605

