Description of problem (please be as detailed as possible and provide log snippets):
Pods that use the "rbd" StorageClass (ocs-storagecluster-ceph-rbd) are failing to start: their PVCs stay Pending because RBD provisioning fails with "open /etc/ceph-csi-config/config.json: no such file or directory".

Version of all relevant components (if applicable):
ocs-registry:4.17.0-77

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes (2/2)

Can this issue reproduce from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Install ODF using ocs-ci.
2. Check whether all pods are in the Running state.

Actual results:
$ oc get pods | egrep -v "Running|Completed"
NAME                  READY   STATUS    RESTARTS   AGE
csi-rbdplugin-42pmv   0/4     Pending   0          51m
csi-rbdplugin-kbfp9   0/4     Pending   0          51m
csi-rbdplugin-lqb5q   0/4     Pending   0          51m
demo-pod2             0/1     Pending   0          9m51s
noobaa-db-pg-0        0/1     Pending   0          48m

Expected results:
All pods should be in the Running state.

Additional info:

$ oc get pod noobaa-db-pg-0 -o yaml
apiVersion: v1
kind: Pod
metadata:
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-08-19T04:54:35Z"
    message: '0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims.
      preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort

$ oc describe pvc db-noobaa-db-pg-0
Name:          db-noobaa-db-pg-0
Namespace:     openshift-storage
StorageClass:  ocs-storagecluster-ceph-rbd
Status:        Pending
Volume:
Labels:        app=noobaa
               noobaa-db=postgres
Annotations:   volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       noobaa-db-pg-0
Events:
  Type     Reason                Age                    From                         Message
  ----     ------                ----                   ----                         -------
  Warning  ProvisioningFailed    31m (x14 over 50m)     openshift-storage.rbd.csi.ceph.com_openshift-storage.rbd.csi.ceph.com-ctrlplugin-857f4768-7lmcb_d106146e-0f9e-4542-ac51-6493a3b8b0fd  failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = InvalidArgument desc = failed to fetch monitor list using clusterID (openshift-storage): error fetching configuration for cluster ID "openshift-storage": open /etc/ceph-csi-config/config.json: no such file or directory
  Normal   ExternalProvisioning  4m48s (x186 over 50m)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Normal   Provisioning          91s (x22 over 50m)     openshift-storage.rbd.csi.ceph.com_openshift-storage.rbd.csi.ceph.com-ctrlplugin-857f4768-7lmcb_d106146e-0f9e-4542-ac51-6493a3b8b0fd  External provisioner is provisioning volume for claim "openshift-storage/db-noobaa-db-pg-0"

job: https://url.corp.redhat.com/7f1ee14
must gather: https://url.corp.redhat.com/c384b53
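For triage, it may help to confirm whether the ConfigMap backing the /etc/ceph-csi-config mount exists and carries a monitor list for the "openshift-storage" clusterID. A minimal sketch, assuming the Rook-managed names (ConfigMap rook-ceph-csi-config and data key csi-cluster-config-json are assumptions, not taken from the must-gather):

# Assumed ConfigMap name; verify it exists in the namespace.
$ oc -n openshift-storage get configmap rook-ceph-csi-config -o yaml

# Assumed data key; the provisioner error suggests this is missing or empty.
$ oc -n openshift-storage get configmap rook-ceph-csi-config \
    -o jsonpath='{.data.csi-cluster-config-json}'

# Separately, check why the csi-rbdplugin daemonset pods never scheduled.
$ oc -n openshift-storage describe pod csi-rbdplugin-42pmv | grep -A5 Events

If the "openshift-storage" clusterID entry is absent from that JSON, the InvalidArgument error above follows directly, since ceph-csi resolves the monitor list from that configuration at provisioning time.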
Please update the RDT flag/text appropriately.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:8676