It is not possible to restore a snapshot of an XFS volume and use it on the same node as the original volume.

Steps to reproduce:
1. Create a StorageClass with csi.storage.k8s.io/fstype: xfs.
2. Create PVC A + Pod A using it, and store some data on the provisioned volume.
3. Stop Pod A.
4. Take a snapshot of PVC A and restore it into a new volume as PVC B.
5. Run both Pod A (with PVC A) and Pod B (with PVC B) on the same node.

Actual result: one of the pods cannot start:

MountVolume.MountDevice failed for volume "pvc-25a7129e-2cd7-4170-9050-550057ed0e20" : rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o shared,defaults /dev/vde /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-25a7129e-2cd7-4170-9050-550057ed0e20/globalmount: wrong fs type, bad option, bad superblock on /dev/vde, missing codepage or helper program, or other error.
Kernel says: [17567.886004] XFS (vde): Filesystem has duplicate UUID b5820d11-d42d-4fc1-8e4a-ae431b33039d - can't mount

The root cause is that a volume restored from a snapshot carries the same XFS filesystem UUID as the original, and the kernel refuses to mount two XFS filesystems with the same UUID on one node.

Expected result: both pods can run.
This has been fixed upstream in https://github.com/kubernetes-sigs/alibaba-cloud-csi-driver/pull/570. We need to backport the PR.
This needs to be fixed before declaring the CSI driver GA in 4.10.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056