Bug 2183155 - failed to mount the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call
Summary: failed to mount the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Mudit Agarwal
QA Contact: Jilju Joy
URL:
Whiteboard:
Depends On:
Blocks: 2183687
 
Reported: 2023-03-30 13:05 UTC by Madhu Rajanna
Modified: 2023-08-09 17:00 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2183687
Environment:
Last Closed: 2023-06-21 15:25:01 UTC
Embargoed:




Links
System: Github red-hat-storage ocs-operator pull 1976 | Status: open | Summary: ms: add subvolumegroupname to GetStorageClassClaimConfig RPC | Last Updated: 2023-03-30 13:10:53 UTC
System: Github red-hat-storage ocs-operator pull 1977 | Status: open | Summary: Bug 2183155: [release-4.13] ms: add subvolumegroupname to GetStorageClassClaimConfig RPC | Last Updated: 2023-04-01 01:01:01 UTC
System: Red Hat Product Errata RHBA-2023:3742 | Last Updated: 2023-06-21 15:25:29 UTC

Description Madhu Rajanna 2023-03-30 13:05:47 UTC
Description of problem (please be detailed as possible and provide log
snippets):

As consumer mode has moved to the new ocs-client-operator, the Rook resources are no longer created in the consumer cluster. The provider should therefore send the subvolumegroup details back to the consumer so that the CSI driver can use them to mount both existing/migrated volumes and new volumes/PVCs.
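For illustration only, here is a minimal Go sketch (not the actual ocs-operator code; the struct names and values are assumptions) of a provider building a ceph-csi style cluster-config entry that carries the CephFS subvolumegroup name, which is the piece of information this bug reports as missing from the config sent to the consumer:

package main

import (
	"encoding/json"
	"fmt"
)

// cephFSSpec and clusterConfig mirror the ceph-csi cluster-config JSON
// layout (clusterID / monitors / cephFS.subvolumeGroup); the Go type
// names themselves are illustrative, not taken from ocs-operator.
type cephFSSpec struct {
	SubvolumeGroup string `json:"subvolumeGroup,omitempty"`
}

type clusterConfig struct {
	ClusterID string     `json:"clusterID"`
	Monitors  []string   `json:"monitors"`
	CephFS    cephFSSpec `json:"cephFS"`
}

func main() {
	// Hypothetical values; on a real provider these would come from the
	// subvolumegroup created for the consumer.
	cfg := clusterConfig{
		ClusterID: "openshift-storage",
		Monitors:  []string{"10.0.0.1:6789", "10.0.0.2:6789"},
		CephFS:    cephFSSpec{SubvolumeGroup: "cephfilesystemsubvolumegroup-sample"},
	}

	// Without the cephFS.subvolumeGroup field in the config delivered to
	// the consumer, the CSI driver falls back to its default subvolume
	// group and cannot find subvolumes created under a different group.
	out, err := json.MarshalIndent([]clusterConfig{cfg}, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

The linked pull requests add the corresponding subvolumegroupname field to the GetStorageClassClaimConfig RPC on the provider side.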

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF 4.12 in provider mode
2. Deploy ocs-client-operator in consumer mode
3. Create a CephFS storageclassclaim and a PVC, and bind the PVC to an application pod (see the sketch below for a consumer-side sanity check).
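
As a rough consumer-side sanity check for step 3, a sketch under assumptions (the JSON shape follows the ceph-csi cluster-config format, and the local file name is hypothetical): decode the CSI config delivered to the consumer and confirm a non-empty subvolumegroup before creating the PVC.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// The structs follow the ceph-csi cluster-config JSON layout
// (clusterID / monitors / cephFS.subvolumeGroup); names are illustrative.
type cephFS struct {
	SubvolumeGroup string `json:"subvolumeGroup"`
}

type clusterEntry struct {
	ClusterID string   `json:"clusterID"`
	Monitors  []string `json:"monitors"`
	CephFS    cephFS   `json:"cephFS"`
}

// checkSubvolumeGroup reports whether every cluster entry in a
// config.json document carries a non-empty CephFS subvolumegroup.
func checkSubvolumeGroup(configJSON []byte) error {
	var entries []clusterEntry
	if err := json.Unmarshal(configJSON, &entries); err != nil {
		return fmt.Errorf("parsing config.json: %w", err)
	}
	for _, e := range entries {
		if e.CephFS.SubvolumeGroup == "" {
			return fmt.Errorf("clusterID %q has no cephFS.subvolumeGroup set", e.ClusterID)
		}
	}
	return nil
}

func main() {
	// Feed the config.json extracted from the consumer's CSI ConfigMap
	// and saved locally first (file name here is hypothetical).
	data, err := os.ReadFile("config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkSubvolumeGroup(data); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("all cluster entries carry a CephFS subvolumegroup")
}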


Actual results:

The CephFS subvolume fails to mount because the subvolumegroup name is not sent in the GetStorageConfig RPC call.

Expected results:

The PVC should reach the Bound state and the application pod should be in the Running state.

Additional info:

Comment 7 Jilju Joy 2023-04-10 14:48:09 UTC
Verified in version:

ocs-client-operator.v4.13.0-130.stable             
odf-csi-addons-operator.v4.13.0-130.stable         

OCP 4.12.9
----------------------------------------------------
CephFS PVC was Bound. Attached the PVC to an app pod and the pod reached the state "Running".

$ oc get pvc pvc-cephfs1 
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
pvc-cephfs1   Bound    pvc-d37321bd-9a8e-4564-9dbe-c466500bb191   10Gi       RWO            ocs-storagecluster-cephfs   6m44s


$ oc get pod
NAME              READY   STATUS    RESTARTS   AGE
pod-pvc-cephfs1   1/1     Running   0          5m36s

$ oc get pod pod-pvc-cephfs1 -o yaml | grep claimName
      claimName: pvc-cephfs1

Created file in pod.
$ oc rsh pod-pvc-cephfs1 cat /var/lib/www/html/f1.txt
123


Testing was done on an ODF-to-ODF on ROSA configuration without the agent. Installation of ocs-client-operator and creation of the storageclient were manual processes, and the storageclassclaims were also created manually.

Comment 10 errata-xmlrpc 2023-06-21 15:25:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

