Bug 1994687 - [vSphere]: csv ocs-registry:4.9.0-91.ci is in Installing phase
Summary: [vSphere]: csv ocs-registry:4.9.0-91.ci is in Installing phase
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.9.0
Assignee: Jose A. Rivera
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-17 17:18 UTC by Vijay Avuthu
Modified: 2023-08-09 17:00 UTC
CC List: 8 users

Fixed In Version: v4.9.0-102.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-13 17:44:58 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2021:5086 (last updated 2021-12-13 17:45:57 UTC)

Description Vijay Avuthu 2021-08-17 17:18:12 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ocs-registry:4.9.0-91.ci is in Installing phase

Version of all relevant components (if applicable):

ocs-registry:4.9.0-91.ci
openshift installer (4.9.0-0.nightly-2021-08-16-154237)

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Deployment fails on vSphere

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
2/2

Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Install OCS using ocs-ci
2. Verify operator status


Actual results:

ocs-operator.v4.9.0-91.ci is in Installing Phase


Expected results:

ocs-operator.v4.9.0-91.ci is in Succeeded phase


Additional info:

> During the verification phase, the CSV is still in the Installing phase

$ oc get csv
NAME                        DISPLAY                       VERSION       REPLACES   PHASE
ocs-operator.v4.9.0-91.ci   OpenShift Container Storage   4.9.0-91.ci              Installing

> $ oc describe csv ocs-operator.v4.9.0-91.ci
Name:         ocs-operator.v4.9.0-91.ci
Namespace:    openshift-storage
Labels:       olm.api.1cf66995ee5bab83=provided
              olm.api.345775c37f11b6ca=provided
              olm.api.38cd97520e769cdd=provided


    Last Transition Time:  2021-08-17T13:01:51Z
    Last Update Time:      2021-08-17T13:01:51Z
    Message:               installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
  Last Transition Time:    2021-08-17T13:01:51Z
  Last Update Time:        2021-08-17T13:01:51Z
  Message:                 installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
  Phase:                   Installing
  Reason:                  InstallWaiting
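
OLM keeps a CSV in Installing/InstallWaiting until every Deployment in its install strategy reports minimum availability. A quick triage sketch for that condition (namespace and deployment name taken from the outputs in this report):

$ oc -n openshift-storage get deployment ocs-operator \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
# Available=False here would match the "Deployment does not have minimum
# availability" message shown above.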


Events:
  Type     Reason               Age                   From                        Message
  ----     ------               ----                  ----                        -------
  Normal   RequirementsUnknown  114m (x3 over 115m)   operator-lifecycle-manager  requirements not yet checked
  Normal   RequirementsNotMet   114m (x2 over 114m)   operator-lifecycle-manager  one or more requirements couldn't be found
  Normal   InstallWaiting       114m (x2 over 114m)   operator-lifecycle-manager  installing: waiting for deployment rook-ceph-operator to become ready: deployment "rook-ceph-operator" not available: Deployment does not have minimum availability.
  Normal   InstallSucceeded     114m                  operator-lifecycle-manager  install strategy completed with no errors
  Warning  ComponentUnhealthy   113m (x2 over 113m)   operator-lifecycle-manager  installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
  Normal   NeedsReinstall       113m (x2 over 113m)   operator-lifecycle-manager  installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
  Normal   AllRequirementsMet   113m (x5 over 114m)   operator-lifecycle-manager  all requirements found, attempting install
  Normal   InstallSucceeded     113m (x4 over 114m)   operator-lifecycle-manager  waiting for install components to report healthy
  Normal   InstallWaiting       113m (x4 over 114m)   operator-lifecycle-manager  installing: waiting for deployment ocs-operator to become ready: deployment "ocs-operator" not available: Deployment does not have minimum availability.
  Warning  InstallCheckFailed   115s (x38 over 108m)  operator-lifecycle-manager  install timeout


> All pods are in running state

$ oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-68sgp                                            3/3     Running     0          5h35m
csi-cephfsplugin-8482x                                            3/3     Running     0          5h35m
csi-cephfsplugin-provisioner-7b96dbcbff-49nh9                     6/6     Running     0          5h35m
csi-cephfsplugin-provisioner-7b96dbcbff-8v9ct                     6/6     Running     0          5h35m
csi-cephfsplugin-thd4g                                            3/3     Running     0          5h35m
csi-rbdplugin-4hl5n                                               3/3     Running     0          5h35m
csi-rbdplugin-d99zp                                               3/3     Running     0          5h35m
csi-rbdplugin-l9fbx                                               3/3     Running     0          5h35m
csi-rbdplugin-provisioner-8665ff549b-q7jz9                        6/6     Running     0          5h35m
csi-rbdplugin-provisioner-8665ff549b-xftdq                        6/6     Running     0          5h35m
noobaa-core-0                                                     1/1     Running     0          5h28m
noobaa-db-pg-0                                                    1/1     Running     0          5h28m
noobaa-endpoint-fdf697b46-k6cb7                                   1/1     Running     0          5h27m
noobaa-operator-5c66fffc54-4w462                                  1/1     Running     0          5h36m
ocs-metrics-exporter-5d9c9cdc6d-8xpf6                             1/1     Running     0          5h36m
ocs-operator-66f84d6945-k6mv2                                     0/1     Running     0          5h36m
rook-ceph-crashcollector-compute-0-6b78765694-vlfw9               1/1     Running     0          5h28m
rook-ceph-crashcollector-compute-1-6b484f6957-s4p6h               1/1     Running     0          5h29m
rook-ceph-crashcollector-compute-2-6c76f9c8fb-nspd5               1/1     Running     0          5h28m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-84dddc7b8v7n9   2/2     Running     0          5h28m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5dbc4d497w2f9   2/2     Running     0          5h28m
rook-ceph-mgr-a-7f8d5765ff-hgklr                                  2/2     Running     0          5h29m
rook-ceph-mon-a-678bf9d8b8-kvvcl                                  2/2     Running     0          5h35m
rook-ceph-mon-b-6f5ff8556d-m4hhw                                  2/2     Running     0          5h32m
rook-ceph-mon-c-6488464fb6-cgnqc                                  2/2     Running     0          5h32m
rook-ceph-operator-544c679545-l6mn4                               1/1     Running     0          5h36m
rook-ceph-osd-0-59b545d965-9fnpg                                  2/2     Running     0          5h29m
rook-ceph-osd-1-86f545c768-5h6fs                                  2/2     Running     0          5h28m
rook-ceph-osd-2-84d5c57fc5-f2jzj                                  2/2     Running     0          5h28m
rook-ceph-osd-prepare-ocs-deviceset-0-data-0mbs5c--1-twzn4        0/1     Completed   0          5h29m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0vnnx5--1-8kf86        0/1     Completed   0          5h29m
rook-ceph-osd-prepare-ocs-deviceset-2-data-09fppk--1-rrfcz        0/1     Completed   0          5h29m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-7545b79rvnc5   2/2     Running     0          5h28m
rook-ceph-tools-6f45fcf96f-lb945                                  1/1     Running     0          5h26m

> From the rook-ceph-operator-544c679545-l6mn4 logs:

2021-08-17 17:03:57.232095 I | cephclient: disabling ceph filesystem snapshot mirror for filesystem "ocs-storagecluster-cephfilesystem"
2021-08-17 17:03:57.566347 E | ceph-file-controller: failed to reconcile failed to disable mirroring on filesystem "ocs-storagecluster-cephfilesystem": failed to disable ceph filesystem snapshot mirror for filesystem "ocs-storagecluster-cephfilesystem". . Error ENOTSUP: Module 'mirroring' is not enabled (required by command 'fs snapshot mirror disable'): use `ceph mgr module enable mirroring` to enable it: exit status 95
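
The ENOTSUP error itself names the command Ceph expects. A hedged manual check via the toolbox pod from the listing above (diagnostic only; the actual fix shipped in a later build):

$ # List the mgr modules to confirm "mirroring" is not enabled yet:
$ oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph mgr module ls
$ # Enable the module the way the error message suggests:
$ oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph mgr module enable mirroring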


> $ oc describe pod ocs-operator-66f84d6945-k6mv2 

Events:
  Type     Reason      Age                       From     Message
  ----     ------      ----                      ----     -------
  Warning  ProbeError  4m56s (x2205 over 5h29m)  kubelet  Readiness probe error: HTTP probe failed with statuscode: 500
body: [-]readyz failed: reason withheld
healthz check failed
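
The pod stays 0/1 Ready because its readiness endpoint keeps returning HTTP 500. To see exactly which endpoint and port the kubelet is probing (a diagnostic sketch, pod name from this report):

$ oc -n openshift-storage get pod ocs-operator-66f84d6945-k6mv2 \
    -o jsonpath='{.spec.containers[0].readinessProbe}{"\n"}'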

> ocs-operator logs

{"level":"info","ts":1629220359.47824,"logger":"controllers.StorageCluster","msg":"Could not update StorageCluster status.","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster","StorageClu
ster":"openshift-storage/ocs-storagecluster"}
{"level":"error","ts":1629220359.4782882,"logger":"controller-runtime.manager.controller.storagecluster","msg":"Reconciler error","reconciler group":"ocs.openshift.io","reconciler kind":"StorageCluster","name":"
ocs-storagecluster","namespace":"openshift-storage","error":"Operation cannot be fulfilled on storageclusters.ocs.openshift.io \"ocs-storagecluster\": the object has been modified; please apply your changes to t
he latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.
(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:302\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processN
extWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/remote-so
urce/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/
wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source
/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/w
ait.JitterUntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util
/wait/wait.go:99"}


Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5292/consoleFull

must gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthu4-ocs49/vavuthu4-ocs49_20210817T101113/logs/failed_testcase_ocs_logs_1629196536/test_deployment_ocs_logs/

Comment 4 Travis Nielsen 2021-08-19 16:29:37 UTC
The rook release-4.9 branch was synced a couple of days ago, although I'm not sure exactly which build would have the changes. Can you try on the latest build?

Comment 7 Jilju Joy 2021-08-19 18:11:18 UTC
Tested installation from the UI and found a similar issue.

The latest build was used for testing on VMware.

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.9.0-102.ci   OpenShift Container Storage   4.9.0-102.ci              Installing
odf-operator.v4.9.0-102.ci   OpenShift Data Foundation     4.9.0-102.ci              Succeeded

$ oc get storagecluster
NAME                 AGE     PHASE         EXTERNAL   CREATED AT             VERSION
odf-storage-system   5h40m   Progressing              2021-08-19T12:28:46Z   4.9.0
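
The StorageCluster's status conditions usually name the component that keeps it in Progressing. A quick check (storage cluster name from the output above):

$ oc -n openshift-storage describe storagecluster odf-storage-system | grep -A 20 'Conditions:'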



Not all pods are created:
$ oc get pods -o wide -n openshift-storage
NAME                                                              READY   STATUS             RESTARTS          AGE     IP             NODE        NOMINATED NODE   READINESS GATES
csi-cephfsplugin-52hqs                                            3/3     Running            0                 5h20m   10.1.160.201   compute-1   <none>           <none>
csi-cephfsplugin-lp78n                                            3/3     Running            0                 5h20m   10.1.161.101   compute-2   <none>           <none>
csi-cephfsplugin-mrrrf                                            3/3     Running            0                 5h20m   10.1.161.104   compute-0   <none>           <none>
csi-cephfsplugin-provisioner-54fbb98c8f-clz4f                     6/6     Running            0                 5h20m   10.128.2.17    compute-2   <none>           <none>
csi-cephfsplugin-provisioner-54fbb98c8f-p98rv                     6/6     Running            0                 5h20m   10.131.0.42    compute-1   <none>           <none>
csi-rbdplugin-4tdxv                                               3/3     Running            0                 5h20m   10.1.161.104   compute-0   <none>           <none>
csi-rbdplugin-8kr6f                                               3/3     Running            0                 5h20m   10.1.161.101   compute-2   <none>           <none>
csi-rbdplugin-hlj9t                                               3/3     Running            0                 5h20m   10.1.160.201   compute-1   <none>           <none>
csi-rbdplugin-provisioner-84ccc64b48-22h82                        6/6     Running            0                 5h20m   10.131.0.41    compute-1   <none>           <none>
csi-rbdplugin-provisioner-84ccc64b48-k4rz6                        6/6     Running            0                 5h20m   10.129.2.12    compute-0   <none>           <none>
noobaa-core-0                                                     1/1     Running            0                 5h15m   10.129.2.16    compute-0   <none>           <none>
noobaa-db-pg-0                                                    0/1     Pending            0                 5h15m   <none>         <none>      <none>           <none>
noobaa-operator-66c6f88745-x7wb5                                  1/1     Running            0                 5h25m   10.128.2.14    compute-2   <none>           <none>
ocs-metrics-exporter-79f8949777-m6t4b                             1/1     Running            0                 5h25m   10.128.2.15    compute-2   <none>           <none>
ocs-operator-546fd6c668-6bwtg                                     0/1     Running            0                 5h25m   10.129.2.9     compute-0   <none>           <none>
odf-console-744c58ccd7-x2mps                                      2/2     Running            0                 5h25m   10.129.2.11    compute-0   <none>           <none>
odf-operator-controller-manager-8ff7c7b5c-4dm9h                   2/2     Running            0                 5h25m   10.128.2.13    compute-2   <none>           <none>
rook-ceph-crashcollector-compute-0-7bf548c9fc-6blbs               1/1     Running            0                 5h16m   10.129.2.15    compute-0   <none>           <none>
rook-ceph-crashcollector-compute-1-5b55b94666-6gjc2               1/1     Running            0                 5h15m   10.131.0.46    compute-1   <none>           <none>
rook-ceph-crashcollector-compute-2-58b844dbff-pnkh7               1/1     Running            0                 5h15m   10.128.2.24    compute-2   <none>           <none>
rook-ceph-mds-odf-storage-system-cephfilesystem-a-8dfd75d9t8h25   2/2     Running            0                 5h15m   10.128.2.23    compute-2   <none>           <none>
rook-ceph-mds-odf-storage-system-cephfilesystem-b-9ff44779bfvrn   2/2     Running            0                 5h15m   10.129.2.17    compute-0   <none>           <none>
rook-ceph-mgr-a-6574fc7875-4nk94                                  2/2     Running            0                 5h16m   10.128.2.20    compute-2   <none>           <none>
rook-ceph-mon-a-6fd898496-pb9ql                                   2/2     Running            0                 5h19m   10.129.2.14    compute-0   <none>           <none>
rook-ceph-mon-b-5bf678dcfb-gx252                                  2/2     Running            0                 5h19m   10.131.0.45    compute-1   <none>           <none>
rook-ceph-mon-c-c8bf6fdf8-lbdng                                   2/2     Running            0                 5h18m   10.128.2.22    compute-2   <none>           <none>
rook-ceph-operator-7699b484d9-tr6ng                               1/1     Running            0                 5h25m   10.129.2.10    compute-0   <none>           <none>
rook-ceph-osd-0-5f6c8956fb-24pfg                                  2/2     Running            0                 5h15m   10.131.0.49    compute-1   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-thin-0-data-0rrmfk--1-2z9td   0/1     Completed          0                 5h15m   10.131.0.48    compute-1   <none>           <none>
rook-ceph-rgw-odf-storage-system-cephobjectstore-a-57ddf47s84fz   1/2     CrashLoopBackOff   105 (3m46s ago)   5h14m   10.131.0.50    compute-1   <none>           <none>
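
The rgw pod is crash-looping, so its previous container logs should show the startup failure (a triage sketch, pod name from the listing above; --all-containers avoids guessing the container name):

$ oc -n openshift-storage logs rook-ceph-rgw-odf-storage-system-cephobjectstore-a-57ddf47s84fz \
    --all-containers --previous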



Not all PVs and PVCs are created.
$ oc get pvc -n openshift-storage
NAME                               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
db-noobaa-db-pg-0                  Pending                                                                        odf-storage-system-ceph-rbd   5h18m
ocs-deviceset-thin-0-data-0rrmfk   Bound     pvc-02037e4f-02f7-452b-af63-1fe70215c03d   512Gi      RWO            thin                          5h18m
rook-ceph-mon-a                    Bound     pvc-679fe95e-4066-4db6-a766-69f5bdae299c   50Gi       RWO            thin                          5h22m
rook-ceph-mon-b                    Bound     pvc-45f7998c-b8ba-4bc1-82b7-763d00b7fb1d   50Gi       RWO            thin                          5h22m
rook-ceph-mon-c                    Bound     pvc-e7217495-f768-4737-bcfa-949e7e29066a   50Gi       RWO            thin                          5h22m


$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                STORAGECLASS   REASON   AGE
pvc-02037e4f-02f7-452b-af63-1fe70215c03d   512Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-thin-0-data-0rrmfk   thin                    5h18m
pvc-45f7998c-b8ba-4bc1-82b7-763d00b7fb1d   50Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-b                    thin                    5h23m
pvc-679fe95e-4066-4db6-a766-69f5bdae299c   50Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-a                    thin                    5h23m
pvc-e7217495-f768-4737-bcfa-949e7e29066a   50Gi       RWO            Delete           Bound    openshift-storage/rook-ceph-mon-c                    thin                    5h23m
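
db-noobaa-db-pg-0 is Pending on the odf-storage-system-ceph-rbd storage class, which explains why the noobaa-db-pg-0 pod stays Pending. The provisioning failure reason is normally visible in the PVC events (a diagnostic sketch):

$ oc -n openshift-storage describe pvc db-noobaa-db-pg-0
$ # ceph-csi side errors, if any, would be in the RBD provisioner logs
$ # (the container name is an assumption):
$ oc -n openshift-storage logs deploy/csi-rbdplugin-provisioner -c csi-provisioner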



Test steps:
1. Install ODF Operator
2. Go to Operators --> Installed Operators --> select Openshift Data Foundation
3. On the "Operator details" page, go to the "Storage System" tab and click the "Create StorageSystem" button.
4. Select the option "Use an existing storage class" and select "Full Deployment" under the Advanced option.
5. Continue with the rest of the steps and click the "Create" button on the "Review and create" page.
6. Wait for the storage cluster creation to complete.

Tested in version:
odf-operator.v4.9.0-102.ci
OCP 4.9.0-0.nightly-2021-08-18-144658


must-gather logs : http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bug-1994687_2/

Comment 8 Mudit Agarwal 2021-08-20 08:27:09 UTC
This should be a different issue; the must-gather is not of much use here, as the storage cluster was not created.

Please open a new bug with the following outputs (a command sketch follows the list):
>> oc describe csv ocs-operator.v4.9.0-102.ci
>> rook-ceph operator logs
>> ocs-operator logs
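
For reference, those outputs map to roughly these commands (a sketch; deployment names taken from the pod listings earlier in this bug):

$ oc -n openshift-storage describe csv ocs-operator.v4.9.0-102.ci
$ oc -n openshift-storage logs deploy/rook-ceph-operator
$ oc -n openshift-storage logs deploy/ocs-operator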

Comment 9 Vijay Avuthu 2021-08-20 11:35:52 UTC
Update:
=========

Tested with ocs-registry:4.9.0-102.ci and didn't see the "Error ENOTSUP: Module 'mirroring' is not enabled" error, even though the CSV failed to move to the Succeeded phase.

> pods

$ oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-79dnw                                            3/3     Running     0          5h47m
csi-cephfsplugin-d4wbd                                            3/3     Running     0          5h47m
csi-cephfsplugin-mb5ks                                            3/3     Running     0          5h47m
csi-cephfsplugin-provisioner-54fbb98c8f-b5v4l                     6/6     Running     0          5h47m
csi-cephfsplugin-provisioner-54fbb98c8f-pcvgq                     6/6     Running     0          5h47m
csi-rbdplugin-27sm6                                               3/3     Running     0          5h47m
csi-rbdplugin-94xn7                                               3/3     Running     0          5h47m
csi-rbdplugin-lm4qv                                               3/3     Running     0          5h47m
csi-rbdplugin-provisioner-84ccc64b48-5cfvw                        6/6     Running     0          5h47m
csi-rbdplugin-provisioner-84ccc64b48-nd8dl                        6/6     Running     0          5h47m
noobaa-core-0                                                     1/1     Running     0          5h44m
noobaa-db-pg-0                                                    1/1     Running     0          5h44m
noobaa-endpoint-54c66b6b88-cg5f6                                  1/1     Running     0          5h2m
noobaa-operator-68998c44dc-78pb6                                  1/1     Running     0          5h48m
ocs-metrics-exporter-7455f88587-fm6df                             1/1     Running     0          5h48m
ocs-operator-7d8bb7577d-4sffr                                     0/1     Running     0          5h48m
rook-ceph-crashcollector-compute-0-7bf548c9fc-5vpjj               1/1     Running     0          5h44m
rook-ceph-crashcollector-compute-1-5b55b94666-hqczc               1/1     Running     0          5h44m
rook-ceph-crashcollector-compute-2-58b844dbff-n86sw               1/1     Running     0          5h44m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-57b54b46vstmc   2/2     Running     0          5h43m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7c8f8d55k67tb   2/2     Running     0          5h43m
rook-ceph-mgr-a-666787bf5-rc2xz                                   2/2     Running     0          5h45m
rook-ceph-mon-a-78f768bdb4-66sm9                                  2/2     Running     0          5h47m
rook-ceph-mon-b-8886f46f4-45htn                                   2/2     Running     0          5h46m
rook-ceph-mon-c-cb4695b4d-q6kzs                                   2/2     Running     0          5h45m
rook-ceph-operator-5c6c56b95-djt88                                1/1     Running     0          5h48m
rook-ceph-osd-0-6d4d98d9c4-nhqqn                                  2/2     Running     0          5h44m
rook-ceph-osd-1-547dd69cfb-87zg2                                  2/2     Running     0          5h44m
rook-ceph-osd-2-84df9467c-xkgd6                                   2/2     Running     0          5h44m
rook-ceph-osd-prepare-ocs-deviceset-0-data-0jl5s5--1-jjdzp        0/1     Completed   0          5h44m
rook-ceph-osd-prepare-ocs-deviceset-1-data-08bzbq--1-s88cj        0/1     Completed   0          5h44m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0nwpsn--1-4pkrl        0/1     Completed   0          5h44m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-75b567567r5r   2/2     Running     0          5h43m
rook-ceph-tools-cdd8d5c65-7vkg2                                   1/1     Running     0          5h41m

> $ oc logs rook-ceph-operator-5c6c56b95-djt88 | grep -i ENOTSUP
$ 

> Didn't see the error message in the rook-ceph-operator-5c6c56b95-djt88 logs
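
To confirm the mirroring module state positively (rather than relying on the absence of the error), something like this via the toolbox would work (a verification sketch):

$ oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph mgr module ls | grep -i mirror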

> csv status

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.9.0-102.ci   OpenShift Container Storage   4.9.0-102.ci              Installing
$ 

Raised bug https://bugzilla.redhat.com/show_bug.cgi?id=1996033 for the above issue.

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5379/console

Hence, moving the status to Verified.

Comment 15 errata-xmlrpc 2021-12-13 17:44:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086

