Bug 2155402
| Summary: | Pod and PVC for replica-1 pool in pending state | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | narayanspg <ngowda> |
| Component: | ocs-operator | Assignee: | Malay Kumar parida <mparida> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Martin Bukatovic <mbukatov> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.12 | CC: | hnallurv, mparida, mrajanna, muagarwa, nberry, ocs-bugs, odf-bz-bot, rar, sostapov |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.12.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.12.0-156 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-02-08 14:06:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description (narayanspg, 2022-12-21 06:27:08 UTC)

These are the log messages we see in the rbd provisioner pod:
W1221 07:09:36.664740 1 controller.go:934] Retrying syncing claim "cabf9c36-0b32-43b9-9f9b-ad680bccbe3a", failure 705
E1221 07:09:36.664782 1 controller.go:957] error syncing claim "cabf9c36-0b32-43b9-9f9b-ad680bccbe3a": failed to provision volume with StorageClass "ocs-storagecluster-ceph-non-resilient-rbd": rpc error: code = Internal desc = none of the topology constrained pools matched requested topology constraints : pools ([{PoolName:ocs-storagecluster-cephblockpool-worker-0 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-0}]} {PoolName:ocs-storagecluster-cephblockpool-worker-1 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-1}]} {PoolName:ocs-storagecluster-cephblockpool-worker-2 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-2}]}]) requested topology ({Requisite:[segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-0" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-1" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-2" > ] Preferred:[segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-0" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-1" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-2" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0})
I1221 07:09:36.664811 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"openshift-storage", Name:"non-resilient-rbd-pvc", UID:"cabf9c36-0b32-43b9-9f9b-ad680bccbe3a", APIVersion:"v1", ResourceVersion:"3503148", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ocs-storagecluster-ceph-non-resilient-rbd": rpc error: code = Internal desc = none of the topology constrained pools matched requested topology constraints : pools ([{PoolName:ocs-storagecluster-cephblockpool-worker-0 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-0}]} {PoolName:ocs-storagecluster-cephblockpool-worker-1 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-1}]} {PoolName:ocs-storagecluster-cephblockpool-worker-2 DataPoolName: DomainSegments:[{DomainLabel:host DomainValue:worker-2}]}]) requested topology ({Requisite:[segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-0" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-1" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-2" > ] Preferred:[segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-0" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-1" > segments:<key:"topology.openshift-storage.rbd.csi.ceph.com/hostname" value:"worker-2" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0})
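The error above shows the mismatch at the heart of this bug: the topology constrained pools carry the domain label `host`, while the provisioner's requested topology segments are keyed by `topology.openshift-storage.rbd.csi.ceph.com/hostname`, so no pool can ever satisfy the request. The sketch below is not the ceph-csi implementation; it is a simplified model (with illustrative type and function names) of how comparing a pool's domain label against the suffix of the requested topology key fails when the labels differ:

```go
// Simplified sketch of topology-constrained pool matching.
// Assumption: the pool's DomainLabel is compared against the part of the
// requested topology key after the last "/", as suggested by the error text.
package main

import (
	"fmt"
	"strings"
)

type domainSegment struct {
	DomainLabel string // e.g. "host" (from the pool definition)
	DomainValue string // e.g. "worker-0"
}

type topologyPool struct {
	PoolName       string
	DomainSegments []domainSegment
}

// matchPool returns the first pool whose domain segments are all satisfied by
// the requested topology segments (topology key -> value).
func matchPool(pools []topologyPool, requested map[string]string) (string, bool) {
	for _, p := range pools {
		matched := true
		for _, seg := range p.DomainSegments {
			found := false
			for key, val := range requested {
				// Compare the pool's label against the key suffix, e.g.
				// "topology.openshift-storage.rbd.csi.ceph.com/hostname" -> "hostname".
				label := key[strings.LastIndex(key, "/")+1:]
				if label == seg.DomainLabel && val == seg.DomainValue {
					found = true
					break
				}
			}
			if !found {
				matched = false
				break
			}
		}
		if matched {
			return p.PoolName, true
		}
	}
	return "", false
}

func main() {
	// Pools as reported in the provisioner log: DomainLabel is "host".
	pools := []topologyPool{
		{
			PoolName:       "ocs-storagecluster-cephblockpool-worker-0",
			DomainSegments: []domainSegment{{DomainLabel: "host", DomainValue: "worker-0"}},
		},
	}
	// Requested topology as reported in the log: key suffix is "hostname".
	requested := map[string]string{
		"topology.openshift-storage.rbd.csi.ceph.com/hostname": "worker-0",
	}
	if _, ok := matchPool(pools, requested); !ok {
		fmt.Println("no topology constrained pool matched (label \"host\" != key suffix \"hostname\")")
	}
}
```

Running this sketch hits the "no pool matched" branch, mirroring the `ProvisioningFailed` event logged above; aligning the pool domain label with the advertised topology key resolves the match.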
Please attach an OCS must-gather for checking what is wrong.

As per the latest update, Malay is working on the fix for this issue.

The issue is resolved on the latest build tested, so we can close this issue. Thank you.

I am not getting the option to move to a Verified state; only two options are available (ON_QA and CLOSED). Please mark as Verified.

(In reply to narayanspg from comment #16)
> I am not getting the option to move to a Verified state. only two options
> are available (ON_QA and CLOSED).
> Please mark as Verified.

Thanks! Moving the BZ to the Verified state.