Description of problem:
A filesystem PVC can be restored from a snapshot taken from a block PVC, and the restored PVC can be consumed by a Pod. A single misstep by a regular user is then enough for the data restored from the snapshot to be silently destroyed.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-03-08-184701

How reproducible:
Always

Steps to Reproduce:
1. Create a block PVC:

$ oc get pvc test-pvc-5 -ojson | jq .spec.volumeMode
"Block"

2. Create a VolumeSnapshotClass:

$ oc get volumesnapshotclass
NAME                DRIVER            DELETIONPOLICY   AGE
csi-snapshotclass   ebs.csi.aws.com   Delete           26m

3. Take a snapshot of the PVC:

$ oc get volumesnapshot
NAME         READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS       SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysnapshot   true         test-pvc-5                           3Gi           csi-snapshotclass   snapcontent-8eb3fc3d-9bb7-46bb-b284-548c0c3af668   19m            19m

4. Create a restore PVC from the snapshot (volumeMode is not defined, so it defaults to Filesystem):

$ cat restore-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: gp2-csi
  dataSource:
    name: mysnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

5. Create a Pod consuming the restore PVC.

Actual results:
If the Pod uses spec.containers[].volumeMounts.mountPath, the restored volume is formatted and the data on it is lost:

$ oc exec deployment-r-6f8888fc5b-r25gd -- df /mnt/storage
Filesystem     1K-blocks   Used  Available  Use%  Mounted on
/dev/nvme2n1     5095040  20472    5058184    1%  /mnt/storage

If the Pod uses spec.containers[].volumeDevices.devicePath, the mount fails and the data is safe:

Warning  FailedMount  <invalid> (x2 over <invalid>)  kubelet  Unable to attach or mount volumes: unmounted volumes=[local], unattached volumes=[default-token-c65mn local]: volume local has volumeMode Filesystem, but is specified in volumeDevices

Expected results:
Blocking the restore of a block-volume snapshot as a filesystem PVC appears to be the safer behavior.
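Note that the accidental format can be avoided by restoring with an explicit volume mode. The manifest below is a minimal sketch derived from the restore-pvc.yaml above; the only change is the added spec.volumeMode field:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: gp2-csi
  volumeMode: Block          # match the source PVC; defaults to Filesystem when omitted
  dataSource:
    name: mysnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi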
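A Pod then has to consume the Block-mode claim through volumeDevices rather than volumeMounts. A minimal sketch (the pod name, image, and device path are illustrative, not taken from the report):

apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi   # illustrative image
      command: ["sleep", "infinity"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda                    # raw block device node inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: restore-pvc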
I don't think this is a bug: if a user restores a snapshot of a block volume as a filesystem, it is their fault if the volume does not contain the filesystem the PV specifies (ext4 by default). However, there are related concerns about this upstream (https://github.com/kubernetes-csi/external-snapshotter/issues/477) - we would like to forbid regular users from restoring block-volume snapshots as filesystems, since it may have security consequences, and allow only trusted users / apps to do so. Those users should be smart enough to ensure that the content of the volume matches PV.spec.csi.fsType. This requires new API field(s) and will span several Kubernetes releases. I'd prefer to track it upstream rather than in this BZ.
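Until such API support exists, one manual guard is to compare the volume mode of the snapshot's source PVC with the fsType the restored PV carries before putting data at risk. A sketch using the object names from the report above (the second command assumes restore-pvc is already bound):

# Volume mode of the snapshot's source PVC; prints "Block" for test-pvc-5
$ oc get pvc test-pvc-5 -o jsonpath='{.spec.volumeMode}'

# fsType recorded on the restored PV (ext4 by default)
$ oc get pv $(oc get pvc restore-pvc -o jsonpath='{.spec.volumeName}') -o jsonpath='{.spec.csi.fsType}'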