Description of problem:
When accessing a cinder-backed volume for the first time (where the corresponding cinder volume does not contain a filesystem yet) it gets formatted and mounted; the resulting mount point is owned by root:root, mode 0755, so the pod's UID can't write to it.

Version-Release number of selected component (if applicable):
openshift v3.1.0.4-5-gebe80f5
kubernetes v1.1.0-origin-1107-g4c8e6f4

How reproducible:
Always

Steps to Reproduce:
1. Create a volume in cinder, note its ID (e.g. d0f3cda0-cf89-45ae-8a79-fb083f6884f2)

2. Create a PersistentVolume to describe the cinder volume, e.g.:

[root@master ~]# cat registry-pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "registry"
spec:
  capacity:
    storage: "25Gi"
  accessModes:
    - "ReadWriteOnce"
  cinder:
    fsType: "ext3"
    volumeID: "d0f3cda0-cf89-45ae-8a79-fb083f6884f2"

[root@master ~]# oc create -f registry-pv.yaml

3. Create a PersistentVolumeClaim to use the above PV:

[root@master ~]# cat registry-pvc.json
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "registry"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "25Gi"
            }
        }
    }
}

[root@master ~]# oc create -f registry-pvc.json

4. Use that claim in a DC. Using the docker-registry as an example here, something like:

# oc volume dc/docker-registry --add --name=registry-storage -t pvc --claim-name=registry --overwrite

resulting in this in the registry's DC:

  volumes:
  - name: registry-storage
    persistentVolumeClaim:
      claimName: registry

5. Wait for the above DC to be deployed (trigger a deployment if needed)

Actual results:
Looking at the node where the pod is running we can see the volume:

[root@node2 ~]# grep cinder/registry /proc/mounts
/dev/vdc /var/lib/origin/openshift.local.volumes/pods/f4f3e79b-ae4d-11e5-9a3c-fa163e8e7483/volumes/kubernetes.io~cinder/registry ext3 rw,seclabel,relatime,data=ordered 0 0

and we can see that the filesystem that was created there has its mount point owned by root, mode 0755:

[root@node2 ~]# ls -la /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/cinder/mounts/d0f3cda0-cf89-45ae-8a79-fb083f6884f2
total 20
drwxr-xr-x. 3 root root  4096 29 des 11:59 .
drwxr-x---. 3 root root    49 29 des 12:03 ..
drwx------. 2 root root 16384 29 des 11:59 lost+found

As a result, the pod can't write there:

time="2015-12-30T04:38:07-05:00" level=error msg="An error occured" err.code=UNKNOWN err.detail="mkdir /registry/docker: permission denied"
...

Expected results:
Pods from the DC that has the RWO PVC can write to their new volume.
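A quick way to confirm the failure directly from inside the pod (a sketch; <registry-pod> is a placeholder for the actual pod name shown by "oc get pods"):

[root@master ~]# oc rsh <registry-pod> touch /registry/test

While the mount point is root-owned with mode 0755 this fails with "permission denied", matching the registry log above.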
Hi Josep,

I have tracked this down with the help of Paul Weil. The way the permissions problem is solved is through the use of fsGroup. However, the automatic assignment of fsGroups to pods was turned off in 3.1.

To work around the above issue you could try manually adding an fsGroup to your DC:

oc edit dc docker-registry

and change the pod-level security context from:

      securityContext: {}

to:

      securityContext:
        fsGroup: 1234

Wait for the DC's pods to redeploy, and your cinder volume should now be owned by the group 1234 and writable by that group.
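For reference, a minimal sketch of the relevant part of the DC's pod template after that edit (the value 1234 is an arbitrary example GID; use whatever group fits your environment and is allowed by the pod's SCC):

spec:
  template:
    spec:
      # pod-level security context; applies to all containers in the pod
      securityContext:
        fsGroup: 1234

When the kubelet sets up the pod's volumes it applies the fsGroup to volume types that support ownership management, so the cinder mount point ends up group-owned by GID 1234 and group-writable, regardless of which UID the container runs as.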
This has been marked as NEEDINFO for a while, so I am going to close it. Josep, if you find that the above is not working for you, please reopen it.
I am experiencing the same issue on OSE 3.1.1.6 using the cinder backend.
Hi Marcel,

Does comment #2 help?
It does, but it's just a bad user experience :(
Marcel,

To enable automatic fsGroup assignment:

oc get -o json pod | grep scc    # get the scc name
oc edit scc <scc name>           # set the fsGroup type to MustRunAs instead of RunAsAny

This should be on by default in OSE 3.2 and later.
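For reference, the relevant stanza of the SCC after that edit would look roughly like this (a sketch; the "restricted" SCC is just an example, substitute whichever SCC your pods validate against):

# excerpt from the SCC (e.g. restricted); everything else unchanged
fsGroup:
  type: MustRunAs

With MustRunAs and no explicit range, the fsGroup is allocated automatically from the project's group-range annotations. With the previous RunAsAny strategy, no fsGroup is assigned unless the pod requests one, which is why freshly formatted cinder volumes stayed writable only by root.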
*** Bug 1331730 has been marked as a duplicate of this bug. ***