Bug 2164617
Summary: | Unable to expand ocs-storagecluster-ceph-rbd PVCs provisioned in Filesystem mode | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | bmcmurra |
Component: | csi-driver | Assignee: | Nobody <nobody> |
Status: | CLOSED ERRATA | QA Contact: | Yuli Persky <ypersky> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.10 | CC: | hekumar, kbg, khover, muagarwa, nberry, ocs-bugs, odf-bz-bot, tdesala |
Target Milestone: | --- | ||
Target Release: | ODF 4.13.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
.RBD Filesystem PVC expands even when the StagingTargetPath is missing
Previously, RBD Filesystem PVC expansion failed when the `StagingTargetPath` was missing in the NodeExpandVolume RPC call, because Ceph CSI was unable to get the device details needed for the expansion.
With this fix, Ceph CSI goes through all the mount references to identify the `StagingTargetPath` where the RBD image is mounted (a sketch of this lookup follows the table below). As a result, the RBD Filesystem PVC expands successfully even when the `StagingTargetPath` is missing.
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2023-06-21 15:23:08 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2154341 |
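The fix described in the Doc Text above is, conceptually, a fallback lookup: when the NodeExpandVolume request arrives without a staging_target_path, the plugin can derive it from the mount references of the published volume path. The Go sketch below only illustrates that idea and is not the actual Ceph CSI change; the `findStagingPath` helper, the use of `k8s.io/mount-utils`, and the `globalmount` suffix heuristic are all assumptions made for this example.

```go
// Sketch only: illustrates the fallback described in the Doc Text, not the
// actual Ceph CSI patch. The helper name, the use of k8s.io/mount-utils and
// the "globalmount" suffix heuristic are assumptions made for this example.
package main

import (
	"errors"
	"fmt"
	"strings"

	mount "k8s.io/mount-utils"
)

// findStagingPath walks the mount references of the published (per-pod)
// volume path and returns the reference that looks like the staging
// (global) mount created by kubelet.
func findStagingPath(volumePath string) (string, error) {
	mounter := mount.New("")
	refs, err := mounter.GetMountRefs(volumePath)
	if err != nil {
		return "", fmt.Errorf("failed to get mount refs for %q: %w", volumePath, err)
	}
	for _, ref := range refs {
		// kubelet stages CSI volumes under a path ending in ".../globalmount".
		if strings.HasSuffix(ref, "/globalmount") {
			return ref, nil
		}
	}
	return "", errors.New("no staging path found among mount references")
}

func main() {
	// Hypothetical volume_path, in the same form as the NodeExpandVolume
	// request logged in the comments below (the "<...>" parts are placeholders).
	volumePath := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pvc-name>/mount"
	staging, err := findStagingPath(volumePath)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("staging path:", staging)
}
```

With a request like the ID: 173 example logged below, the volume_path is the per-pod `.../kubernetes.io~csi/pvc-.../mount` path and the expected result is the corresponding `.../globalmount` staging path.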
Description
bmcmurra
2023-01-25 20:16:52 UTC
Thanks Hemant, that makes sense now regarding the behavior on my test cluster. It seemed strange at first, as I was able to expand PVCs on the cephfs storage class and rbd block without issue with the same "test-X" PVCs unattached to any pod workload.

> * Is this fix intended to be backported?
>
>> Not sure about it, as this case exists on all ODF versions and we have a workaround. If there is an ask for it, this needs to be decided by the program.

IMO, this does not qualify/satisfy a backport request, so it is very unlikely to be considered.

I tried the scenario described in Comment#22 (expanded the PVC a number of times, then changed the pod count on the deployment from 0 to 1, also a number of times) and verified that the expansion is successful. However, I did not manage to reach a state in which the staging_target_path is missing in the NodeExpandVolume RPC call.

From the logs:

[ypersky@ypersky ocs-ci]$ oc logs csi-rbdplugin-n4xl8 -c csi-rbdplugin | grep NodeExpandVolume
I0418 05:34:46.538397 15996 utils.go:195] ID: 22 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume
I0418 08:27:47.455884 15996 utils.go:195] ID: 53 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume
I0418 08:27:47.629071 15996 utils.go:195] ID: 56 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume
I0418 09:27:27.334022 15996 utils.go:195] ID: 170 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume
I0418 09:27:27.529704 15996 utils.go:195] ID: 173 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume

[ypersky@ypersky ocs-ci]$ oc logs csi-rbdplugin-n4xl8 -c csi-rbdplugin | grep "ID: 173 "
I0418 09:27:27.529704 15996 utils.go:195] ID: 173 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC call: /csi.v1.Node/NodeExpandVolume
I0418 09:27:27.529896 15996 utils.go:206] ID: 173 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC request: {"capacity_range":{"required_bytes":12884901888},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/openshift-storage.rbd.csi.ceph.com/d299fc5020760f39dbd52b7b0bf3d33f93036abba4c82e664b28257c5c71ab52/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_id":"0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e","volume_path":"/var/lib/kubelet/pods/293d17fb-5196-41de-a694-8a2034959b20/volumes/kubernetes.io~csi/pvc-639692f7-45ec-4e83-99bc-c3a5bf66c461/mount"}
I0418 09:27:27.592903 15996 cephcmds.go:105] ID: 173 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e command succeeded: rbd [device list --format=json --device-type krbd]
I0418 09:27:27.614409 15996 utils.go:212] ID: 173 Req-ID: 0001-0011-openshift-storage-0000000000000001-8a2bf746-5abe-4120-95f4-7ad0de0a855e GRPC response: {}

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742
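For anyone repeating the verification above, the small Go program below is a hypothetical helper (not part of ocs-ci or Ceph CSI): it reads saved `oc logs csi-rbdplugin-<pod> -c csi-rbdplugin` output from stdin and reports NodeExpandVolume GRPC requests whose JSON payload lacks `staging_target_path`, correlating call and request lines by the `ID:` field shown in the log excerpt above.

```go
// check_staging.go: hypothetical log-scanning helper (an assumption for this
// example, not a real ocs-ci or Ceph CSI tool). It reads csi-rbdplugin log
// lines from stdin and flags NodeExpandVolume GRPC requests that do not
// carry staging_target_path.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// extractID returns the request ID that follows the first "ID: " marker in a
// log line, e.g. "173" from "... ID: 173 Req-ID: ... GRPC call: ...".
func extractID(line string) string {
	const marker = "ID: "
	i := strings.Index(line, marker)
	if i < 0 {
		return ""
	}
	rest := line[i+len(marker):]
	if j := strings.IndexByte(rest, ' '); j >= 0 {
		return rest[:j]
	}
	return rest
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	// The GRPC request JSON lines are long; enlarge the scanner buffer.
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024)

	expandIDs := map[string]bool{} // IDs of NodeExpandVolume calls seen so far
	for scanner.Scan() {
		line := scanner.Text()
		id := extractID(line)
		if id == "" {
			continue
		}
		switch {
		case strings.Contains(line, "GRPC call: /csi.v1.Node/NodeExpandVolume"):
			expandIDs[id] = true
		case expandIDs[id] && strings.Contains(line, "GRPC request:"):
			if !strings.Contains(line, `"staging_target_path"`) {
				fmt.Println("NodeExpandVolume request without staging_target_path:", line)
			}
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```

Usage would be along the lines of `oc logs csi-rbdplugin-n4xl8 -c csi-rbdplugin | go run check_staging.go`; in the verified run above, every NodeExpandVolume request still carried staging_target_path, so the helper would print nothing.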