must-gather is uploading
All logs are at: http://rhsqe-repo.lab.eng.blr.redhat.com/cns/ocs-qe-bugs/BZ-1806972/
The bug is that MON pods come up while they don't have a PV attached:

[ebenahar@localhost ~]$ oc get pv -n openshift-storage
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS                  REASON   AGE
pvc-0a470424-5de5-4bc6-881e-a858ebb07a20   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-1-0-n5f7x   gp2                                    7m35s
pvc-250e9122-2061-4902-ab83-f98b1ad25811   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-0-0-rvbvk   gp2                                    7m35s
pvc-74b55fcc-69c8-4b97-bbd8-da6beb433ca9   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-2-0-r4ggr   gp2                                    7m35s
pvc-8d4dc2fc-d310-4649-80bb-0ee9080f1ff8   50Gi       RWO            Delete           Bound    openshift-storage/db-noobaa-db-0            ocs-storagecluster-ceph-rbd            6m41s

[ebenahar@localhost ~]$ oc get pod -n openshift-storage --selector=app=rook-ceph-mon
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mon-a-696695fcbd-wkphc   1/1     Running   0          9m35s
rook-ceph-mon-b-59b4f4c49-9vgsn    1/1     Running   0          9m20s
rook-ceph-mon-c-7446b9575f-4r2gg   1/1     Running   0          8m59s

[ebenahar@localhost ~]$ oc get pod -n openshift-storage --selector=app=rook-ceph-osd
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-856f4749f8-7l48l   1/1     Running   0          8m41s
rook-ceph-osd-1-79c7df9f47-2fpls   1/1     Running   0          8m37s
rook-ceph-osd-2-774cbdcd5-l7nbp    1/1     Running   0          8m31s
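The symptom can be checked programmatically. Below is a minimal sketch (a hypothetical helper, not part of rook-ceph or any existing QE tooling): given a pod object as returned by `oc get pod <name> -o json`, it lists the PVC-backed volumes; for the affected mon pods that list would be empty even though the pod reports Running.

```python
def pvc_volumes(pod):
    """Return the claim names of all persistentVolumeClaim-backed
    volumes in a pod object (dict parsed from `oc get pod -o json`)."""
    volumes = pod.get("spec", {}).get("volumes", [])
    return [
        v["persistentVolumeClaim"]["claimName"]
        for v in volumes
        if "persistentVolumeClaim" in v
    ]

# Illustrative, stripped-down mon pod spec (made-up data, not a capture
# from the affected cluster): the mon only mounts a hostPath volume.
mon_pod = {
    "metadata": {"name": "rook-ceph-mon-a-696695fcbd-wkphc"},
    "spec": {
        "volumes": [
            {"name": "ceph-daemon-data",
             "hostPath": {"path": "/var/lib/rook"}},
        ]
    },
}

print(pvc_volumes(mon_pod))  # -> [] : mon pod has no PVC attached
```

A non-empty list for each mon pod is what one would expect once the fix lands; an empty list reproduces the state shown in the `oc get` output above.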
How did this pass CI acceptance tests?
This is likely a change in behavior that has led to a regression. It needs to be fixed in ocs-operator.
Ugh... for some reason the must-gather does not have the StorageCluster YAML in it. Can someone repro and get the output for `oc get storageclusters -o yaml`?
https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/OCS%20Build%20Pipeline%204.3/63/ has this fix.
correcting status to MODIFIED since build has deployment issues
(In reply to Michael Adam from comment #15)
> correcting status to MODIFIED since build has deployment issues

This has been stuck in MODIFIED for almost a week?
https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/OCS%20Build%20Pipeline%204.3/82/ has the fix and is consumed by QE
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1437
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days