Bug 2305660 - failed to provision volumes with StorageClass "ocs-storagecluster-ceph-rbd", error fetching configuration for cluster ID "openshift-storage": open /etc/ceph-csi-config/config.json
Summary: failed to provision volumes with StorageClass "ocs-storagecluster-ceph-rbd", ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-client-operator
Version: 4.17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.17.0
Assignee: Leela Venkaiah Gangavarapu
QA Contact: Jilju Joy
URL:
Whiteboard: isf-provider
Depends On: 2304235
Blocks:
 
Reported: 2024-08-19 05:47 UTC by Vijay Avuthu
Modified: 2024-10-30 14:31 UTC (History)
6 users

Fixed In Version: 4.17.0-79
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-10-30 14:31:29 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OCSBZM-8827 0 None None None 2024-08-22 03:19:47 UTC
Red Hat Product Errata RHSA-2024:8676 0 None None None 2024-10-30 14:31:33 UTC

Description Vijay Avuthu 2024-08-19 05:47:56 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Pods that use the "rbd" StorageClass are stuck in Pending


Version of all relevant components (if applicable):
ocs-registry:4.17.0-77


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate the complexity of the scenario that caused this bug from 1 to 5
(1 - very simple, 5 - very complex):
1

Is this issue reproducible?
2/2

Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
yes

Steps to Reproduce:
1. Install ODF using ocs-ci
2. Check whether all pods are in the Running state
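Step 2 can be scripted as a small filter over the `oc get pods` listing; a minimal sketch (the helper name `not_running` is ours, and it only assumes the standard column layout where STATUS is the third field):

```shell
#!/bin/sh
# not_running: filter an `oc get pods` listing (read from stdin) down to
# pods whose STATUS column is neither Running nor Completed, printing
# the pod name and its current status.
not_running() {
    awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
}

# Usage: oc get pods -n openshift-storage | not_running
```

An empty output means every pod is Running or Completed; any line printed is a pod worth a closer look with `oc describe`.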


Actual results:
$ oc get pods | egrep -v "Running|Completed"
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-rbdplugin-42pmv                                               0/4     Pending     0          51m
csi-rbdplugin-kbfp9                                               0/4     Pending     0          51m
csi-rbdplugin-lqb5q                                               0/4     Pending     0          51m
demo-pod2                                                         0/1     Pending     0          9m51s
noobaa-db-pg-0                                                    0/1     Pending     0          48m



Expected results:
All pods should be running

Additional info:

$ oc get pod noobaa-db-pg-0 -o yaml
apiVersion: v1
kind: Pod
metadata:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-08-19T04:54:35Z"
    message: '0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims.
      preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort

$ oc describe pvc db-noobaa-db-pg-0
Name:          db-noobaa-db-pg-0
Namespace:     openshift-storage
StorageClass:  ocs-storagecluster-ceph-rbd
Status:        Pending
Volume:        
Labels:        app=noobaa
               noobaa-db=postgres
Annotations:   volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       noobaa-db-pg-0
Events:
  Type     Reason                Age                    From                                                                                                                                  Message
  ----     ------                ----                   ----                                                                                                                                  -------
  Warning  ProvisioningFailed    31m (x14 over 50m)     openshift-storage.rbd.csi.ceph.com_openshift-storage.rbd.csi.ceph.com-ctrlplugin-857f4768-7lmcb_d106146e-0f9e-4542-ac51-6493a3b8b0fd  failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = InvalidArgument desc = failed to fetch monitor list using clusterID (openshift-storage): error fetching configuration for cluster ID "openshift-storage": open /etc/ceph-csi-config/config.json: no such file or directory
  Normal   ExternalProvisioning  4m48s (x186 over 50m)  persistentvolume-controller                                                                                                           Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Normal   Provisioning          91s (x22 over 50m)     openshift-storage.rbd.csi.ceph.com_openshift-storage.rbd.csi.ceph.com-ctrlplugin-857f4768-7lmcb_d106146e-0f9e-4542-ac51-6493a3b8b0fd  External provisioner is provisioning volume for claim "openshift-storage/db-noobaa-db-pg-0"

job: https://url.corp.redhat.com/7f1ee14
must gather: https://url.corp.redhat.com/c384b53
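For context on the ProvisioningFailed event above: the ceph-csi provisioner resolves a StorageClass's clusterID against a cluster config file mounted at /etc/ceph-csi-config/config.json, so a missing mount surfaces as exactly this "no such file or directory" error. A minimal sketch of that failure path (the helper `check_csi_config` is hypothetical, written only to mirror the error shape from the event; it does not reimplement ceph-csi):

```shell
#!/bin/sh
# check_csi_config CONFIG_PATH CLUSTER_ID
# Mimics the open() that fails in the event: prints the same error shape
# and returns non-zero when the mounted config.json is absent.
check_csi_config() {
    config="$1"
    cluster_id="$2"
    if [ ! -f "$config" ]; then
        printf 'error fetching configuration for cluster ID "%s": open %s: no such file or directory\n' \
            "$cluster_id" "$config" >&2
        return 1
    fi
    printf 'config present for cluster ID "%s"\n' "$cluster_id"
}

# Example: the path and cluster ID from the event message.
check_csi_config /etc/ceph-csi-config/config.json openshift-storage || true
```

On an affected provisioner pod, this check failing points at the ConfigMap backing the /etc/ceph-csi-config volume never having been created or mounted, which matches the csi-rbdplugin pods themselves sitting in Pending.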

Comment 10 Sunil Kumar Acharya 2024-08-26 11:12:44 UTC
Please update the RDT flag/text appropriately.

Comment 14 errata-xmlrpc 2024-10-30 14:31:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:8676

