Bug 1583058
| Summary: | PV's readOnly parameter does not work | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qin Ping <piqin> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Status: | CLOSED UPSTREAM | QA Contact: | Chao Yang <chaoyang> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.10.0 | CC: | aos-bugs, aos-storage-staff, bchilds, jsafrane |
| Target Milestone: | --- | | |
| Target Release: | 3.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-09-09 13:49:22 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Read-only behaves the same as in filesystem volumes. It must be set in a pod definition:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
      readOnly: true    # <------ HERE
```
You can check that the same PV definition with "volumeMode: Filesystem" will be mounted into a pod as read-write unless the pod specifies "readOnly: true".
So there is no Block-specific bug. On the other hand, we may consider this a bug in the generic PV implementation: the PV says "readOnly: true", but kubelet mounts / maps it into a container as RW.
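For comparison, a minimal sketch of the filesystem-mode equivalent, where "readOnly: true" on the volumeMounts entry is the flag kubelet enforces. The pod name, mount path, and claim name (fs-pvc) here are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-fs-volume        # hypothetical name
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data        # hypothetical mount path
      readOnly: true              # enforced: the volume is mounted read-only
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fs-pvc           # hypothetical claim name
```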
Setting "readOnly" in the Pod definition worked.
And yes, a Filesystem volume has the same issue.
```
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
      readOnly: true
# dd if=/dev/zero of=/dev/xvda bs=1M count=20
dd: failed to open '/dev/xvda': Operation not permitted
```
Ok, so let's rephrase this bug as "pv.spec.iscsi.readOnly flag does not work in PersistentVolumes". It applies to all other volume types as well, and it's a known bug (or feature?) in Kubernetes, since forever; see e.g. https://github.com/kubernetes/kubernetes/issues/61758#issuecomment-376506621. I'll look at it, but it's not a blocker for 3.10.

There is an old upstream issue that did not get enough attention to fix this: https://github.com/kubernetes/kubernetes/issues/70503. The reason is that we cannot change the API of stable objects, even if the API does not actually do anything (like pv.spec.iscsi.readOnly).
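To restate the distinction as a sketch (fragments only, not complete manifests): the readOnly flag inside the PV's volume source is the one kubelet ignores, while the flag in the pod's persistentVolumeClaim reference is the one that takes effect:

```yaml
# PersistentVolume fragment: NOT enforced by kubelet
# (see kubernetes/kubernetes#70503)
spec:
  iscsi:
    readOnly: true      # ignored when the volume is mounted/mapped into a pod
---
# Pod fragment: enforced by kubelet
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
      readOnly: true    # mounted/mapped read-only
```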
Description of problem:
Set PV's readOnly=true and volumeMode=Block, but we can still write data to the volume mapped into the container.

Version-Release number of selected component (if applicable):
oc v3.10.0-0.53.0
kubernetes v1.10.0+b81c8f8
openshift v3.10.0-0.53.0

How reproducible:
Always

Steps to Reproduce:
1. Enable feature gate: BlockVolume
2. Create a PV with volumeMode=Block and readOnly=true
3. Create a PVC bound to the PV
4. Create a Pod using the PVC
5. Write data to the device which is the volume mapped into the container

Actual results:
```
# dd if=/dev/zero of=/dev/xvda bs=1M count=20
20+0 records in
20+0 records out
20971520 bytes (21 MB, 20 MiB) copied, 0.0206752 s, 1.0 GB/s
```

Expected results:
Should report "permission denied"

Master Log:

Node Log (of failed PODs):

PV Dump:
```
# oc get pv block-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: 2018-05-28T07:27:39Z
  finalizers:
  - kubernetes.io/pv-protection
  name: block-pv
  resourceVersion: "8751"
  selfLink: /api/v1/persistentvolumes/block-pv
  uid: 99faaa82-6248-11e8-8ec0-fa163e9eb52c
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: block-pvc
    namespace: blockvolume
    resourceVersion: "8749"
    uid: 9d7de283-6248-11e8-8ec0-fa163e9eb52c
  iscsi:
    iqn: iqn.2016-04.test.com:storage.target00
    iscsiInterface: default
    lun: 0
    readOnly: true
    targetPortal: 172.30.49.192:3260
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
status:
  phase: Bound
```

PVC Dump:
```
# oc get pvc block-pvc -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: 2018-05-28T07:27:45Z
  finalizers:
  - kubernetes.io/pvc-protection
  name: block-pvc
  namespace: blockvolume
  resourceVersion: "8753"
  selfLink: /api/v1/namespaces/blockvolume/persistentvolumeclaims/block-pvc
  uid: 9d7de283-6248-11e8-8ec0-fa163e9eb52c
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Block
  volumeName: block-pv
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  phase: Bound
```

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
```
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
```