Bug 1806972

Summary: Mon pods are not backed by PVCs
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Avi Liani <alayani>
Component: ocs-operator
Assignee: Jose A. Rivera <jarrpa>
Status: CLOSED ERRATA
QA Contact: Pratik Surve <prsurve>
Severity: urgent
Priority: urgent
Version: 4.3
CC: ebenahar, gmeno, madam, mzalewsk, ocs-bugs, sostapov
Target Milestone: ---
Keywords: Automation, AutomationBlocker, Regression, TestBlocker
Target Release: OCS 4.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-04-14 09:45:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 2 Avi Liani 2020-02-25 11:47:21 UTC
must-gather is uploading

Comment 3 Avi Liani 2020-02-25 12:21:51 UTC
All logs are at: http://rhsqe-repo.lab.eng.blr.redhat.com/cns/ocs-qe-bugs/BZ-1806972/

Comment 4 Elad 2020-02-25 13:25:01 UTC
The bug is that the MON pods come up without a PV attached:


[ebenahar@localhost ~]$ oc get pv -n openshift-storage
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS                  REASON   AGE
pvc-0a470424-5de5-4bc6-881e-a858ebb07a20   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-1-0-n5f7x   gp2                                    7m35s
pvc-250e9122-2061-4902-ab83-f98b1ad25811   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-0-0-rvbvk   gp2                                    7m35s
pvc-74b55fcc-69c8-4b97-bbd8-da6beb433ca9   340Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-2-0-r4ggr   gp2                                    7m35s
pvc-8d4dc2fc-d310-4649-80bb-0ee9080f1ff8   50Gi       RWO            Delete           Bound    openshift-storage/db-noobaa-db-0            ocs-storagecluster-ceph-rbd            6m41s

[ebenahar@localhost ~]$ oc get pod -n openshift-storage --selector=app=rook-ceph-mon
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mon-a-696695fcbd-wkphc   1/1     Running   0          9m35s
rook-ceph-mon-b-59b4f4c49-9vgsn    1/1     Running   0          9m20s
rook-ceph-mon-c-7446b9575f-4r2gg   1/1     Running   0          8m59s

[ebenahar@localhost ~]$ oc get pod -n openshift-storage --selector=app=rook-ceph-osd
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-856f4749f8-7l48l   1/1     Running   0          8m41s
rook-ceph-osd-1-79c7df9f47-2fpls   1/1     Running   0          8m37s
rook-ceph-osd-2-774cbdcd5-l7nbp    1/1     Running   0          8m31s
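The check behind the output above can be automated: the PV list shows claims only for the OSD device sets and the NooBaa DB, so a mon pod's volumes must be inspected to see whether any of them reference a PersistentVolumeClaim. A minimal sketch of that check, using illustrative pod data (the pod name matches the listing above, but the volume spec is a hypothetical example, not taken from the must-gather):

```python
# Sketch: given parsed pod objects (shape as from `oc get pod -o json`),
# flag mon pods whose volumes are not backed by a PersistentVolumeClaim.

def pvc_backed(pod):
    """Return True if any volume in the pod spec references a PVC."""
    volumes = pod.get("spec", {}).get("volumes", [])
    return any("persistentVolumeClaim" in v for v in volumes)

# Illustrative data only: a mon pod using a hostPath volume instead of a PVC,
# which is the failure mode reported in this bug.
mon_pods = [
    {
        "metadata": {"name": "rook-ceph-mon-a-696695fcbd-wkphc"},
        "spec": {
            "volumes": [
                {"name": "ceph-daemon-data",
                 "hostPath": {"path": "/var/lib/rook/mon-a"}},
            ]
        },
    },
]

for pod in mon_pods:
    name = pod["metadata"]["name"]
    status = "PVC-backed" if pvc_backed(pod) else "NOT PVC-backed"
    print(f"{name}: {status}")
```

In a live cluster the same condition can be spot-checked with `oc get pod -n openshift-storage --selector=app=rook-ceph-mon -o json` and looking for `persistentVolumeClaim` entries under `.spec.volumes`.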

Comment 6 Yaniv Kaul 2020-02-25 13:31:26 UTC
How did this pass CI acceptance tests?

Comment 8 Jose A. Rivera 2020-02-25 15:21:56 UTC
This is likely a change in behavior that has led to a regression. It needs to be fixed in ocs-operator.

Comment 9 Jose A. Rivera 2020-02-25 15:46:25 UTC
Ugh... for some reason the must-gather does not have the StorageCluster YAML in it. Can someone repro and get the output of `oc get storageclusters -o yaml`?

Comment 15 Michael Adam 2020-02-26 15:13:07 UTC
correcting status to MODIFIED since build has deployment issues

Comment 16 Yaniv Kaul 2020-03-03 07:26:54 UTC
(In reply to Michael Adam from comment #15)
> correcting status to MODIFIED since build has deployment issues

This is stuck in MODIFIED for almost a week?

Comment 17 Michael Adam 2020-03-04 08:23:40 UTC
https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/OCS%20Build%20Pipeline%204.3/82/

contains the fix and has been consumed by QE.

Comment 21 errata-xmlrpc 2020-04-14 09:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1437

Comment 22 Red Hat Bugzilla 2023-09-14 05:53:20 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days