Bug 2183155

Summary: failed to mount the CephFS subvolume as the subvolumegroup name is not sent in the GetStorageConfig RPC call
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Madhu Rajanna <mrajanna>
Component: ocs-operator
Assignee: Mudit Agarwal <muagarwa>
Status: CLOSED ERRATA
QA Contact: Jilju Joy <jijoy>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.12
CC: jijoy, nberry, ocs-bugs, odf-bz-bot
Target Milestone: ---
Target Release: ODF 4.13.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Clones: 2183687
Environment:
Last Closed: 2023-06-21 15:25:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2183687    

Description Madhu Rajanna 2023-03-30 13:05:47 UTC
Description of problem (please be as detailed as possible and provide log snippets):

With consumer mode moving to the new ocs-client-operator, Rook resources are no longer created in the consumer cluster. The provider should therefore send the subvolumegroup details back to the consumer so that the CSI driver can use them to mount existing/migrated volumes as well as new volumes/PVCs.
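For context, a minimal sketch of where the consumer-side CSI driver expects this information, assuming ceph-csi reads its cluster configuration from a ConfigMap delivered to the consumer; the ConfigMap name and namespace below are placeholders and vary by deployment:

$ # Inspect the CSI cluster configuration on the consumer cluster.
$ # ceph-csi typically carries the CephFS subvolumegroup in an entry such as:
$ #   [{"clusterID":"<id>","monitors":["..."],"cephFS":{"subvolumeGroup":"<name>"}}]
$ oc get configmap <csi-cluster-config-map> -n <csi-namespace> -o yaml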

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF 4.12 in provider mode
2. Deploy ocs-client-operator in consumer mode
3. Create a CephFS storageclassclaim and a PVC, and attach the PVC to an application pod (see the sketch below).
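
A minimal sketch of step 3, assuming the storageclassclaim results in a StorageClass named ocs-storagecluster-cephfs (as in the verification comment below); the PVC name, pod name, and image are placeholders for illustration:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-cephfs1
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc-cephfs1
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /var/lib/www/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-cephfs1
EOF
$ oc get pvc/pvc-cephfs1 pod/pod-pvc-cephfs1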


Actual results:

Mounting the CephFS subvolume fails because the subvolumegroup name is not sent in the GetStorageConfig RPC call.

Expected results:

The PVC should reach the Bound state and the application pod should be in the Running state.

Additional info:

Comment 7 Jilju Joy 2023-04-10 14:48:09 UTC
Verified in version:

ocs-client-operator.v4.13.0-130.stable             
odf-csi-addons-operator.v4.13.0-130.stable         

OCP 4.12.9
----------------------------------------------------
The CephFS PVC was Bound. Attached the PVC to an app pod, and the pod reached the "Running" state.

$ oc get pvc pvc-cephfs1 
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
pvc-cephfs1   Bound    pvc-d37321bd-9a8e-4564-9dbe-c466500bb191   10Gi       RWO            ocs-storagecluster-cephfs   6m44s


$ oc get pod
NAME              READY   STATUS    RESTARTS   AGE
pod-pvc-cephfs1   1/1     Running   0          5m36s

$ oc get pod pod-pvc-cephfs1 -o yaml | grep claimName
      claimName: pvc-cephfs1

Created a file in the pod and read it back:
$ oc rsh pod-pvc-cephfs1 cat /var/lib/www/html/f1.txt
123


Testing was done on an ODF-to-ODF on ROSA configuration without the agent. Installation of the ocs-client-operator and creation of the storageclient were manual processes, and the storageclassclaims were also created manually.
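
As an additional check beyond the recorded verification, one could confirm on the provider side that the subvolume was created in the expected subvolumegroup; the toolbox deployment, namespace, filesystem name, and group name below are assumptions that depend on the deployment:

$ # List the subvolumegroups of the CephFS filesystem and the subvolumes in
$ # the group used by CSI (names are deployment-specific placeholders).
$ oc rsh -n openshift-storage deploy/rook-ceph-tools \
    ceph fs subvolumegroup ls ocs-storagecluster-cephfilesystem
$ oc rsh -n openshift-storage deploy/rook-ceph-tools \
    ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name <subvolumegroup-name>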

Comment 10 errata-xmlrpc 2023-06-21 15:25:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742