Description of problem:
When creating a deployment with a PVC using the cinder csi driver, the mounted volume has no write access; it looks like the fsGroup is not correctly set.

$ oc rsh mydeploy03-55f486b9f-cspkj
sh-4.4$ id
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
sh-4.4$ ls -l /mnt
total 4
drwxr-xr-x. 3 root root 4096 Dec 1 07:15 local
sh-4.4$ ls -ldZ /mnt/local
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0:c20,c25 4096 Dec 1 07:15 /mnt/local
sh-4.4$ echo "Hello csi cinder" > /mnt/local/hello
sh: /mnt/local/hello: Permission denied

The same scenario works well with the cinder in-tree plugin. A simple pod works well under the csi plugin.

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2020-11-29-133728

Steps to Reproduce:
1. Install an OSP cluster; the cinder csi driver is installed.
2. Create a PVC with the cinder csi driver and create a deployment to consume this PVC; the mounted volume is /mnt/local.
3. Write something into the mounted volume.
4. Compare with the in-tree plugin.

Actual results:
The mounted volume has no write access when using the cinder csi driver.

-------------------------------------
CSI driver case:

$ oc create -f dep.yaml
deployment.apps/mydeploy03 created
persistentvolumeclaim/mydep-pvc03 created
$ oc get pod
NAME                         READY   STATUS    RESTARTS   AGE
mydeploy03-55f486b9f-cspkj   1/1     Running   0          16s
$ oc get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mydep-pvc03   Bound    pvc-f3b8cebc-f143-48b9-a7e0-b1f5077ac39b   1Gi        RWO            standard-csi   33s
$ oc rsh mydeploy03-55f486b9f-cspkj
sh-4.4$ id
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
sh-4.4$ ls -l /mnt
total 4
drwxr-xr-x. 3 root root 4096 Dec 1 07:15 local
sh-4.4$ ls -ldZ /mnt/local
drwxr-xr-x. 3 root root system_u:object_r:container_file_t:s0:c20,c25 4096 Dec 1 07:15 /mnt/local
sh-4.4$ echo "Hello csi cinder" > /mnt/local/hello
sh: /mnt/local/hello: Permission denied

-------------------------------------
In-tree plugin:

$ oc create -f dep.yaml
deployment.apps/mydeploy04 created
persistentvolumeclaim/mydep-pvc04 created
$ oc get pod
NAME                          READY   STATUS    RESTARTS   AGE
mydeploy04-7f558d7578-t859d   1/1     Running   0          22s
$ oc get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mydep-pvc04   Bound    pvc-d9f4d6df-cb4d-494f-8a2b-8eeec8006f97   1Gi        RWO            standard       28s
$ oc rsh mydeploy04-7f558d7578-t859d
sh-4.4$ id
uid=1000650000(1000650000) gid=0(root) groups=0(root),1000650000
sh-4.4$ cd /mnt
sh-4.4$ ls -l
total 4
drwxrwsr-x. 3 root 1000650000 4096 Dec 1 07:51 local
sh-4.4$ ls -ldZ /mnt/local
drwxrwsr-x. 3 root 1000650000 system_u:object_r:container_file_t:s0:c0,c26 4096 Dec 1 07:53 /mnt/local
sh-4.4$ echo "Hello in-tree cinder" > /mnt/local/hello
sh-4.4$ more /mnt/local/hello
Hello in-tree cinder

-------------------------------------
Something different in the mount info on the node:

/dev/vdc on /var/lib/kubelet/pods/efa46102-e2f4-4478-9568-195f579d3675/volumes/kubernetes.io~cinder/pvc-d9f4d6df-cb4d-494f-8a2b-8eeec8006f97 type ext4 (rw,relatime,seclabel)
/dev/vdc on /var/lib/kubelet/pods/efa46102-e2f4-4478-9568-195f579d3675/volumes/kubernetes.io~cinder/pvc-d9f4d6df-cb4d-494f-8a2b-8eeec8006f97 type ext4 (rw,relatime,seclabel)
/dev/vdd on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f3b8cebc-f143-48b9-a7e0-b1f5077ac39b/globalmount type ext4 (rw,relatime,seclabel)
/dev/vdd on /var/lib/kubelet/pods/4e63c8f8-a821-405e-8b81-6db9262294da/volumes/kubernetes.io~csi/pvc-f3b8cebc-f143-48b9-a7e0-b1f5077ac39b/mount type ext4 (rw,relatime,seclabel)
/dev/vdd on /var/lib/kubelet/pods/4e63c8f8-a821-405e-8b81-6db9262294da/volumes/kubernetes.io~csi/pvc-f3b8cebc-f143-48b9-a7e0-b1f5077ac39b/mount type ext4 (rw,relatime,seclabel)

-------------------------------------
Expected results:
The mounted volume should have write access when using the cinder csi driver.

Additional info:
yaml file used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy03
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: docker.io/aosqe/storage@sha256:a05b96d373be86f46e76817487027a7f5b8b5f87c0ac18a246b018df11529b40
        ports:
        - containerPort: 80
        volumeMounts:
        - name: local
          mountPath: /mnt/local
      volumes:
      - name: local
        persistentVolumeClaim:
          claimName: mydep-pvc03
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydep-pvc03
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-csi
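For context: the drwxrwsr-x group ownership in the in-tree case is kubelet applying the pod's effective fsGroup, which OpenShift picks from the project's SCC-annotated range. The same behaviour can also be requested explicitly in the pod spec; a minimal sketch, with a hypothetical group ID:

    spec:
      securityContext:
        fsGroup: 1000650000   # hypothetical value; OpenShift normally assigns this from the namespace's SCC range
      containers:
      - name: hello-openshift
        ...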
The PV does not include the fsType field, e.g.:

  csi:
    driver: cinder.csi.openstack.org
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1606875731732-8081-cinder.csi.openstack.org
    volumeHandle: e4c7dc47-0fba-4ed5-9d8e-d1b9fd152b72

If creating a new storageclass with the following parameter:

  parameters:
    csi.storage.k8s.io/fstype: ext4

then the PV has the fsType field:

  csi:
    driver: cinder.csi.openstack.org
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1606875731732-8081-cinder.csi.openstack.org
    volumeHandle: 8fe26576-cbbf-41ce-93b9-c7ca776a8d4e

And the volume has write access:

$ oc rsh deployment-4-85fd89cc87-hljgx
sh-4.4# ls -ldZ /mnt/storage/
drwxrwsr-x. 2 root 1000610001 system_u:object_r:container_file_t:s0:c5,c25 6 Dec 2 08:08 /mnt/storage/
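For reference, a complete StorageClass carrying that parameter would look something like this (a sketch; the metadata.name is hypothetical):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi-ext4   # hypothetical name
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: ext4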
Adding one more scenario for the single-pod test: in the earlier single-pod test in the description section, I used the administrator user and the test passed (data can be written into the pod). But when using a common user, there is no write access either. The user/group of the mounted dir is root/root, and the container runs as a specific user/group, as in the deployment case. A sketch of the pod used is below.
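The single-pod variant is essentially the deployment's pod template used standalone; a minimal sketch (names hypothetical, reusing the image and claim from the description):

apiVersion: v1
kind: Pod
metadata:
  name: mypod   # hypothetical name
spec:
  containers:
  - name: hello-openshift
    image: docker.io/aosqe/storage@sha256:a05b96d373be86f46e76817487027a7f5b8b5f87c0ac18a246b018df11529b40
    volumeMounts:
    - name: local
      mountPath: /mnt/local
  volumes:
  - name: local
    persistentVolumeClaim:
      claimName: mydep-pvc03   # hypothetical; any PVC bound via the csi storageclass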
Again, more yaml tweaking.
Reproduced on 4.7.0-0.nightly-2021-01-10-070949
Created and used the following storageclass, which 'fixes' the problem:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi-fstype
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  fsType: ext4
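A PVC opting into this class simply references it by name; a minimal sketch (claim name hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydep-pvc05   # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-csi-fstype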
IIUC, the underlying cause of this issue is intentional behaviour by kubernetes: it doesn't set fsGroup during attach unless the volume meets a number of preconditions, one of which is an explicit fstype. This means that the implicitly-ext4 volume doesn't have its fsGroup set, while the explicitly-ext4 volume does.

Apparently this may be addressed more completely in the future by: https://kubernetes-csi.github.io/docs/support-fsgroup.html

For now, we can fix this by making csi-provisioner add an explicit fstype when it is missing, by adding '--default-fstype=ext4' to its args in openstack-cinder-csi-driver-controller. Patch to follow.
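Illustratively, the fix amounts to a one-line addition to the csi-provisioner container args in the controller manifest; a sketch of the relevant fragment (other args and fields elided):

      containers:
      - name: csi-provisioner
        args:
        - --default-fstype=ext4   # have the provisioner set an explicit fsType when the StorageClass omits one
        # ... existing args (e.g. --csi-address) unchanged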
Verified pass on upgrade from 4.6.9 to 4.7.0-0.nightly-2021-01-21-172657.

Deployment:

$ oc create -f 04_deployment_pvc_sc.yaml
deployment.apps/mydeploy03 created
persistentvolumeclaim/mydep-pvc03 created
$ oc get pod
NAME                          READY   STATUS    RESTARTS   AGE
mydeploy03-55bfb56b4d-lkknd   1/1     Running   0          4m3s
$ oc rsh mydeploy03-55bfb56b4d-lkknd
sh-4.4# cd /mnt/local/
sh-4.4# touch a
sh-4.4# echo "Hello csi cinder" > /mnt/local/hello
sh-4.4# echo /mnt/local/hello
/mnt/local/hello

Pod with non-admin user:

$ oc rsh mypod
sh-4.4$ id
uid=1000630000(1000630000) gid=0(root) groups=0(root),1000630000
sh-4.4$ ls -ld /mnt/local/
drwxrwsr-x. 3 root 1000630000 4096 Jan 22 06:12 /mnt/local/
sh-4.4$ touch /mnt/local/test
sh-4.4$
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633