Bug 1298938 - Can't write to new cinder volumes
Summary: Can't write to new cinder volumes
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.1.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Sami Wagiaalla
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks: 1267746
 
Reported: 2016-01-15 13:07 UTC by Josep 'Pep' Turro Mauri
Modified: 2019-10-10 10:54 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-29 15:10:36 UTC
Target Upstream Version:
Embargoed:



Description Josep 'Pep' Turro Mauri 2016-01-15 13:07:44 UTC
Description of problem:

When a cinder-backed volume is accessed for the first time (i.e. the corresponding cinder volume does not yet contain a filesystem), it is formatted and mounted. The resulting mount point is owned by root:root with mode 0755, so the pod's uid cannot write to it.
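
As an illustrative sketch of the failure mode (the mount path and uid here are placeholders, not taken from this report):

    # the freshly formatted mount point is root:root, mode 0755
    $ stat -c '%U:%G %a' /path/to/cinder/mount
    root:root 755
    # so any non-root uid, such as the one the pod runs as, is denied
    $ sudo -u '#1000000001' touch /path/to/cinder/mount/test
    touch: cannot touch '/path/to/cinder/mount/test': Permission denied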

Version-Release number of selected component (if applicable):
openshift v3.1.0.4-5-gebe80f5
kubernetes v1.1.0-origin-1107-g4c8e6f4

How reproducible:
Always

Steps to Reproduce:
1. Create a volume in cinder, note its ID (e.g. d0f3cda0-cf89-45ae-8a79-fb083f6884f2)
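
For illustration, with the cinder CLI (assuming a v1-era client; --display-name became --name in later client versions):

    [root@master ~]# cinder create --display-name registry 25
    [root@master ~]# cinder list | grep registry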
2. Create a PersistentVolume to describe the cinder volume, e.g.

[root@master ~]# cat registry-pv.yaml 
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "registry" 
spec:
  capacity:
    storage: "25Gi" 
  accessModes:
    - "ReadWriteOnce"
  cinder: 
    fsType: "ext3" 
    volumeID: "d0f3cda0-cf89-45ae-8a79-fb083f6884f2"
[root@master ~]# oc create -f registry-pv.yaml

3. Create a PersistentVolumeClaim to use the above PV:

[root@master ~]# cat registry-pvc.json 
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "registry"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "25Gi"
            }
        }
    }
}
[root@master ~]# oc create -f registry-pvc.json
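
To verify that the claim bound to the PV before continuing, oc get can be used; the STATUS column should read Bound for both:

    [root@master ~]# oc get pv registry
    [root@master ~]# oc get pvc registry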

4. Use that claim in a DC. Using the docker-registry as an example here, something like:
# oc volume dc/docker-registry --add --name=registry-storage -t pvc --claim-name=registry --overwrite

which results in the following in the registry's DC:

      volumes:
      - name: registry-storage
        persistentVolumeClaim:
          claimName: registry

5. Wait for the above DC to be deployed (trigger if needed)
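
For example, to trigger a deployment manually (command as of OSE 3.1; later releases replace it with oc rollout latest):

    [root@master ~]# oc deploy docker-registry --latest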

Actual results:

Looking at the node where the pod is running, we can see the volume:

    [root@node2 ~]# grep cinder/registry /proc/mounts
    /dev/vdc /var/lib/origin/openshift.local.volumes/pods/f4f3e79b-ae4d-11e5-9a3c-fa163e8e7483/volumes/kubernetes.io~cinder/registry ext3 rw,seclabel,relatime,data=ordered 0 0

and we can see that the filesystem that was created there has its mount point owned by root, mode 755:

    [root@node2 ~]# ls -la /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/cinder/mounts/d0f3cda0-cf89-45ae-8a79-fb083f6884f2
    total 20
    drwxr-xr-x. 3 root root  4096 29 des 11:59 .
    drwxr-x---. 3 root root    49 29 des 12:03 ..
    drwx------. 2 root root 16384 29 des 11:59 lost+found

As a result, the pod can't write there:

  time="2015-12-30T04:38:07-05:00" level=error msg="An error occured" err.code=UNKNOWN err.detail="mkdir /registry/docker: permission denied" ...

Expected results:

Pods from the DC that uses the RWO PVC can write to their new volume.

Comment 2 Sami Wagiaalla 2016-01-15 22:01:28 UTC
Hi Josep,

I have tracked this down with the help of Paul Weil.

The permissions problem is solved through the use of fsGroup; however, automatic assignment of fsGroups to pods was turned off in 3.1.

To work around the issue, you can manually add an fsGroup to your DC:

oc edit dc docker-registry
and change the pod-level security context from:
      securityContext: {}
to:
      securityContext:
        fsGroup: 1234

Wait for the DC's pods to redeploy; your cinder volume should now be owned by group 1234 and writable by that group.
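
For clarity, a minimal sketch of where that setting sits in the DC (pod-level, i.e. under the pod template's spec, not inside a container; 1234 is an arbitrary example gid):

    spec:
      template:
        spec:
          securityContext:
            fsGroup: 1234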

Comment 3 Sami Wagiaalla 2016-01-29 15:10:36 UTC
This has been marked as NEEDINFO for a while. I am going to close it.

Josep, if you find that the above is not working for you, please reopen this bug.

Comment 4 Marcel Wysocki 2016-03-31 09:55:20 UTC
I am experiencing the same issue on OSE 3.1.1.6 using the cinder backend.

Comment 5 Sami Wagiaalla 2016-03-31 23:35:29 UTC
Hi Marcel,

Does comment #2 help?

Comment 6 Marcel Wysocki 2016-04-04 08:25:03 UTC
It does, but it's just a bad user experience. :(

Comment 7 Sami Wagiaalla 2016-04-04 13:55:43 UTC
Marcel,

To enable automatic fsGroup assignment:
  oc get -o json pod | grep scc   # find the SCC the pods run under
  oc edit scc <scc name>
  # set the fsGroup strategy type to MustRunAs instead of RunAsAny

This should be on by default in OSE 3.2 and later.
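
For reference, a sketch of the relevant stanza in the SCC before and after the change (assuming the pods run under the restricted SCC):

    # before: no fsGroup is assigned automatically
    fsGroup:
      type: RunAsAny
    # after: pods get an fsGroup from the project's allocated range
    fsGroup:
      type: MustRunAs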

Comment 8 Sami Wagiaalla 2016-05-02 13:44:59 UTC
*** Bug 1331730 has been marked as a duplicate of this bug. ***

