Bug 1937202 - Can restore a filesystem PVC from a snapshot taken from a block PVC
Summary: Can restore a filesystem PVC from a snapshot taken from a block PVC
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: aos-storage-staff@redhat.com
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-10 06:44 UTC by Qin Ping
Modified: 2021-03-12 16:33 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-12 16:33:00 UTC
Target Upstream Version:
Embargoed:



Description Qin Ping 2021-03-10 06:44:10 UTC
Description of problem:
A filesystem PVC can be restored from a snapshot taken from a block PVC, and the restored PVC can be used by a Pod without error. If a regular user performs this misoperation, the data from the snapshot ends up damaged.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-03-08-184701

How reproducible:
Always

Steps to Reproduce:
1. Create a block PVC
$ oc get pvc test-pvc-5 -ojson|jq .spec.volumeMode
"Block"
2. Create a volumesnapshotclass
$ oc get volumesnapshotclass
NAME                DRIVER            DELETIONPOLICY   AGE
csi-snapshotclass   ebs.csi.aws.com   Delete           26m
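The VolumeSnapshotClass listed above can be sketched as follows (snapshot.storage.k8s.io/v1 is assumed as the API version):
$ cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapshotclass
driver: ebs.csi.aws.com
deletionPolicy: Delete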
3. Take a snapshot of the PVC
$ oc get volumesnapshot
NAME         READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS       SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysnapshot   true         test-pvc-5                           3Gi           csi-snapshotclass   snapcontent-8eb3fc3d-9bb7-46bb-b284-548c0c3af668   19m            19m
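The snapshot itself can be taken with a manifest like this (a sketch; only the names shown in the output above are taken from the report):
$ cat snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysnapshot
spec:
  volumeSnapshotClassName: csi-snapshotclass
  source:
    persistentVolumeClaimName: test-pvc-5   # the block PVC from step 1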
4. Create a restore PVC from the snapshot (without defining volumeMode)
$ cat restore-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: gp2-csi
  dataSource:
    name: mysnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
5. Create a Pod consuming the restore PVC
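A Deployment along the following lines reproduces the volumeMounts case shown in the actual results (a sketch; the image and Deployment name are assumptions, while the mount path, volume name, and PVC name come from the output below):
$ cat deployment-r.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-r
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-r
  template:
    metadata:
      labels:
        app: deployment-r
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi8/ubi   # assumed image
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: local
          mountPath: /mnt/storage    # mounting as a filesystem triggers the reformat
      volumes:
      - name: local
        persistentVolumeClaim:
          claimName: restore-pvc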


Actual results:
If the Pod defines spec.containers[].volumeMounts[].mountPath, the restored volume is formatted and the data on it is lost.
$ oc exec deployment-r-6f8888fc5b-r25gd -- df /mnt/storage
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/nvme2n1     5095040 20472   5058184   1% /mnt/storage

If the Pod defines spec.containers[].volumeDevices[].devicePath instead, kubelet reports the following warning and the data is safe:
  Warning  FailedMount             <invalid> (x2 over <invalid>)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[local], unattached volumes=[default-token-c65mn local]: volume local has volumeMode Filesystem, but is specified in volumeDevices
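For this second case, the Deployment sketched under step 5 would use volumeDevices instead of volumeMounts, roughly as follows (the device path is an assumption):
        volumeDevices:           # replaces the volumeMounts entry in the step 5 sketch
        - name: local
          devicePath: /dev/xvdf  # assumed raw block device path inside the container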


Expected results:
It seems safer to block restoring a block-volume snapshot as a filesystem PVC.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
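For context (not part of the original report): spec.volumeMode defaults to Filesystem when omitted, regardless of the mode of the snapshot's source volume, which is why the restore PVC in step 4 came up as a filesystem volume. Restoring with Block mode set explicitly keeps the data intact; a minimal sketch, with names assumed:
$ cat restore-pvc-block.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc-block
spec:
  storageClassName: gp2-csi
  volumeMode: Block              # keep the source's Block mode so the data is not reformatted
  dataSource:
    name: mysnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi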

Comment 1 Jan Safranek 2021-03-12 16:33:00 UTC
I don't think this is a bug: if a user restores a snapshot of a block volume as a filesystem volume, it's their fault if the volume does not contain the filesystem the PV specifies (ext4 by default).

However, there are some other concerns about this upstream (https://github.com/kubernetes-csi/external-snapshotter/issues/477): we would like to prevent regular users from restoring block-volume snapshots as filesystem volumes, since it may have security consequences, and allow only trusted users / apps to do so. Such users should be smart enough to ensure that the content of the volume matches PV.spec.csi.fsType. This requires new API field(s) and will span several Kubernetes releases. I'd prefer to track it upstream instead of in this BZ.

