Bug 1994687
| Summary: | [vSphere]: csv ocs-registry:4.9.0-91.ci is in Installing phase | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Vijay Avuthu <vavuthu> |
| Component: | ocs-operator | Assignee: | Jose A. Rivera <jarrpa> |
| Status: | CLOSED ERRATA | QA Contact: | Raz Tamir <ratamir> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | jijoy, kramdoss, madam, muagarwa, ocs-bugs, odf-bz-bot, sostapov, tnielsen |
| Target Milestone: | --- | Keywords: | Automation, Regression |
| Target Release: | ODF 4.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | v4.9.0-102.ci | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-12-13 17:44:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vijay Avuthu
2021-08-17 17:18:12 UTC
The rook release-4.9 branch was sync'd a couple of days ago, although I'm not sure which build exactly would have the changes. Can you try on the latest build?

Tested installation from the UI and found a similar issue. The latest build is used for testing on VMware.

$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
ocs-operator.v4.9.0-102.ci OpenShift Container Storage 4.9.0-102.ci Installing
odf-operator.v4.9.0-102.ci OpenShift Data Foundation 4.9.0-102.ci Succeeded

$ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
odf-storage-system 5h40m Progressing 2021-08-19T12:28:46Z 4.9.0

Not all pods are created:

$ oc get pods -o wide -n openshift-storage
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-52hqs 3/3 Running 0 5h20m 10.1.160.201 compute-1 <none> <none>
csi-cephfsplugin-lp78n 3/3 Running 0 5h20m 10.1.161.101 compute-2 <none> <none>
csi-cephfsplugin-mrrrf 3/3 Running 0 5h20m 10.1.161.104 compute-0 <none> <none>
csi-cephfsplugin-provisioner-54fbb98c8f-clz4f 6/6 Running 0 5h20m 10.128.2.17 compute-2 <none> <none>
csi-cephfsplugin-provisioner-54fbb98c8f-p98rv 6/6 Running 0 5h20m 10.131.0.42 compute-1 <none> <none>
csi-rbdplugin-4tdxv 3/3 Running 0 5h20m 10.1.161.104 compute-0 <none> <none>
csi-rbdplugin-8kr6f 3/3 Running 0 5h20m 10.1.161.101 compute-2 <none> <none>
csi-rbdplugin-hlj9t 3/3 Running 0 5h20m 10.1.160.201 compute-1 <none> <none>
csi-rbdplugin-provisioner-84ccc64b48-22h82 6/6 Running 0 5h20m 10.131.0.41 compute-1 <none> <none>
csi-rbdplugin-provisioner-84ccc64b48-k4rz6 6/6 Running 0 5h20m 10.129.2.12 compute-0 <none> <none>
noobaa-core-0 1/1 Running 0 5h15m 10.129.2.16 compute-0 <none> <none>
noobaa-db-pg-0 0/1 Pending 0 5h15m <none> <none> <none> <none>
noobaa-operator-66c6f88745-x7wb5 1/1 Running 0 5h25m 10.128.2.14 compute-2 <none> <none>
ocs-metrics-exporter-79f8949777-m6t4b 1/1 Running 0 5h25m 10.128.2.15 compute-2 <none> <none>
ocs-operator-546fd6c668-6bwtg 0/1 Running 0 5h25m 10.129.2.9 compute-0 <none> <none>
odf-console-744c58ccd7-x2mps 2/2 Running 0 5h25m 10.129.2.11 compute-0 <none> <none>
odf-operator-controller-manager-8ff7c7b5c-4dm9h 2/2 Running 0 5h25m 10.128.2.13 compute-2 <none> <none>
rook-ceph-crashcollector-compute-0-7bf548c9fc-6blbs 1/1 Running 0 5h16m 10.129.2.15 compute-0 <none> <none>
rook-ceph-crashcollector-compute-1-5b55b94666-6gjc2 1/1 Running 0 5h15m 10.131.0.46 compute-1 <none> <none>
rook-ceph-crashcollector-compute-2-58b844dbff-pnkh7 1/1 Running 0 5h15m 10.128.2.24 compute-2 <none> <none>
rook-ceph-mds-odf-storage-system-cephfilesystem-a-8dfd75d9t8h25 2/2 Running 0 5h15m 10.128.2.23 compute-2 <none> <none>
rook-ceph-mds-odf-storage-system-cephfilesystem-b-9ff44779bfvrn 2/2 Running 0 5h15m 10.129.2.17 compute-0 <none> <none>
rook-ceph-mgr-a-6574fc7875-4nk94 2/2 Running 0 5h16m 10.128.2.20 compute-2 <none> <none>
rook-ceph-mon-a-6fd898496-pb9ql 2/2 Running 0 5h19m 10.129.2.14 compute-0 <none> <none>
rook-ceph-mon-b-5bf678dcfb-gx252 2/2 Running 0 5h19m 10.131.0.45 compute-1 <none> <none>
rook-ceph-mon-c-c8bf6fdf8-lbdng 2/2 Running 0 5h18m 10.128.2.22 compute-2 <none> <none>
rook-ceph-operator-7699b484d9-tr6ng 1/1 Running 0 5h25m 10.129.2.10 compute-0 <none> <none>
rook-ceph-osd-0-5f6c8956fb-24pfg 2/2 Running 0 5h15m 10.131.0.49 compute-1 <none> <none>
rook-ceph-osd-prepare-ocs-deviceset-thin-0-data-0rrmfk--1-2z9td 0/1 Completed 0 5h15m 10.131.0.48 compute-1 <none> <none>
rook-ceph-rgw-odf-storage-system-cephobjectstore-a-57ddf47s84fz 1/2 CrashLoopBackOff 105 (3m46s ago) 5h14m 10.131.0.50 compute-1 <none> <none>
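A possible triage sketch for the two unhealthy pods above (not part of the original report; names are taken from this run, and the container name "rgw" is an assumption):

$ oc -n openshift-storage describe pod noobaa-db-pg-0
# the Events section shows why the pod (and its db-noobaa-db-pg-0 PVC) is stuck Pending
$ oc -n openshift-storage describe pod rook-ceph-rgw-odf-storage-system-cephobjectstore-a-57ddf47s84fz
# shows the last state and exit code of the crash-looping RGW container
$ oc -n openshift-storage logs --previous rook-ceph-rgw-odf-storage-system-cephobjectstore-a-57ddf47s84fz -c rgw
# logs from the previous (crashed) RGW attempt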
Not all PVCs and PVs are created.

$ oc get pvc -n openshift-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-noobaa-db-pg-0 Pending odf-storage-system-ceph-rbd 5h18m
ocs-deviceset-thin-0-data-0rrmfk Bound pvc-02037e4f-02f7-452b-af63-1fe70215c03d 512Gi RWO thin 5h18m
rook-ceph-mon-a Bound pvc-679fe95e-4066-4db6-a766-69f5bdae299c 50Gi RWO thin 5h22m
rook-ceph-mon-b Bound pvc-45f7998c-b8ba-4bc1-82b7-763d00b7fb1d 50Gi RWO thin 5h22m
rook-ceph-mon-c Bound pvc-e7217495-f768-4737-bcfa-949e7e29066a 50Gi RWO thin 5h22m

$ oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-02037e4f-02f7-452b-af63-1fe70215c03d 512Gi RWO Delete Bound openshift-storage/ocs-deviceset-thin-0-data-0rrmfk thin 5h18m
pvc-45f7998c-b8ba-4bc1-82b7-763d00b7fb1d 50Gi RWO Delete Bound openshift-storage/rook-ceph-mon-b thin 5h23m
pvc-679fe95e-4066-4db6-a766-69f5bdae299c 50Gi RWO Delete Bound openshift-storage/rook-ceph-mon-a thin 5h23m
pvc-e7217495-f768-4737-bcfa-949e7e29066a 50Gi RWO Delete Bound openshift-storage/rook-ceph-mon-c thin 5h23m

Test steps:
1. Install the ODF operator.
2. Go to Operators --> Installed Operators --> select OpenShift Data Foundation.
3. On the "Operator details" page, go to the "Storage System" tab and click the "Create StorageSystem" button.
4. Select the option "Use an existing storage class" and select "Full Deployment" under the Advanced option.
5. Continue with the rest of the steps and click the "Create" button on the review and create page.
6. Wait for the storage cluster creation to complete (a CLI sketch for watching this is at the end of this comment).

Tested in version:
odf-operator.v4.9.0-102.ci
OCP 4.9.0-0.nightly-2021-08-18-144658

must-gather logs: http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bug-1994687_2/
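For step 6, a minimal CLI sketch for watching the creation instead of the UI (a sketch only, assuming the default openshift-storage namespace; exact phase strings can vary by build):

$ oc get storagecluster -n openshift-storage -w
# watch until PHASE reports Ready
$ oc get csv -n openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'
# both the ocs-operator and odf-operator CSVs should eventually report Succeeded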
This should be a different issue; the must-gather is not of much use here as the storage cluster was not created.

Please open a new bug with the following outputs:
>> oc describe csv ocs-operator.v4.9.0-102.ci
>> rook-ceph operator logs
>> ocs-operator logs
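A sketch of how these three outputs can be collected (assuming the default openshift-storage namespace; the redirect file names are just examples):

$ oc describe csv ocs-operator.v4.9.0-102.ci -n openshift-storage > csv-describe.txt
$ oc logs deployment/rook-ceph-operator -n openshift-storage > rook-ceph-operator.log
$ oc logs deployment/ocs-operator -n openshift-storage > ocs-operator.log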
Update:
=========

Tested with ocs-registry:4.9.0-102.ci and didn't see the "Error ENOTSUP: Module 'mirroring' is not enabled" error, even though the csv failed to move to the Succeeded phase.

> pods

$ oc get pods
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-79dnw 3/3 Running 0 5h47m
csi-cephfsplugin-d4wbd 3/3 Running 0 5h47m
csi-cephfsplugin-mb5ks 3/3 Running 0 5h47m
csi-cephfsplugin-provisioner-54fbb98c8f-b5v4l 6/6 Running 0 5h47m
csi-cephfsplugin-provisioner-54fbb98c8f-pcvgq 6/6 Running 0 5h47m
csi-rbdplugin-27sm6 3/3 Running 0 5h47m
csi-rbdplugin-94xn7 3/3 Running 0 5h47m
csi-rbdplugin-lm4qv 3/3 Running 0 5h47m
csi-rbdplugin-provisioner-84ccc64b48-5cfvw 6/6 Running 0 5h47m
csi-rbdplugin-provisioner-84ccc64b48-nd8dl 6/6 Running 0 5h47m
noobaa-core-0 1/1 Running 0 5h44m
noobaa-db-pg-0 1/1 Running 0 5h44m
noobaa-endpoint-54c66b6b88-cg5f6 1/1 Running 0 5h2m
noobaa-operator-68998c44dc-78pb6 1/1 Running 0 5h48m
ocs-metrics-exporter-7455f88587-fm6df 1/1 Running 0 5h48m
ocs-operator-7d8bb7577d-4sffr 0/1 Running 0 5h48m
rook-ceph-crashcollector-compute-0-7bf548c9fc-5vpjj 1/1 Running 0 5h44m
rook-ceph-crashcollector-compute-1-5b55b94666-hqczc 1/1 Running 0 5h44m
rook-ceph-crashcollector-compute-2-58b844dbff-n86sw 1/1 Running 0 5h44m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-57b54b46vstmc 2/2 Running 0 5h43m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7c8f8d55k67tb 2/2 Running 0 5h43m
rook-ceph-mgr-a-666787bf5-rc2xz 2/2 Running 0 5h45m
rook-ceph-mon-a-78f768bdb4-66sm9 2/2 Running 0 5h47m
rook-ceph-mon-b-8886f46f4-45htn 2/2 Running 0 5h46m
rook-ceph-mon-c-cb4695b4d-q6kzs 2/2 Running 0 5h45m
rook-ceph-operator-5c6c56b95-djt88 1/1 Running 0 5h48m
rook-ceph-osd-0-6d4d98d9c4-nhqqn 2/2 Running 0 5h44m
rook-ceph-osd-1-547dd69cfb-87zg2 2/2 Running 0 5h44m
rook-ceph-osd-2-84df9467c-xkgd6 2/2 Running 0 5h44m
rook-ceph-osd-prepare-ocs-deviceset-0-data-0jl5s5--1-jjdzp 0/1 Completed 0 5h44m
rook-ceph-osd-prepare-ocs-deviceset-1-data-08bzbq--1-s88cj 0/1 Completed 0 5h44m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0nwpsn--1-4pkrl 0/1 Completed 0 5h44m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-75b567567r5r 2/2 Running 0 5h43m
rook-ceph-tools-cdd8d5c65-7vkg2 1/1 Running 0 5h41m

> $ oc logs rook-ceph-operator-5c6c56b95-djt88 | grep -i ENOTSUP
$
> Didn't see the error msg from rook-ceph-operator-5c6c56b95-djt88

> csv status

$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
ocs-operator.v4.9.0-102.ci OpenShift Container Storage 4.9.0-102.ci Installing
$

Raised bug https://bugzilla.redhat.com/show_bug.cgi?id=1996033 for the above issue.

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5379/console

Hence moving the status to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086