Bug 1298938 - Can't write to new cinder volumes
Status: CLOSED NOTABUG
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.1.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Sami Wagiaalla
QA Contact: Jianwei Hou
Depends On:
Blocks: 1267746
 
Reported: 2016-01-15 08:07 EST by Josep 'Pep' Turro Mauri
Modified: 2017-07-03 11:31 EDT
CC: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-01-29 10:10:36 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Josep 'Pep' Turro Mauri 2016-01-15 08:07:44 EST
Description of problem:

When a cinder-backed volume is accessed for the first time (i.e. the corresponding cinder volume does not contain a filesystem yet), it gets formatted and mounted; the resulting mount point is owned by root:root with mode 0755, so the pod's UID can't write to it.

Version-Release number of selected component (if applicable):
openshift v3.1.0.4-5-gebe80f5
kubernetes v1.1.0-origin-1107-g4c8e6f4

How reproducible:
Always

Steps to Reproduce:
1. Create a volume in cinder, note its ID (e.g. d0f3cda0-cf89-45ae-8a79-fb083f6884f2)
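
For reference, the volume can be created with the cinder CLI along these lines (flag names vary by client version; the name and size here are illustrative):

  cinder create --display-name registry 25   # 25 GB volume; note the ID in the output
  cinder list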
2. Create a PersistentVolume to describe the cinder volume, e.g.

[root@master ~]# cat registry-pv.yaml 
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "registry" 
spec:
  capacity:
    storage: "25Gi" 
  accessModes:
    - "ReadWriteOnce"
  cinder: 
    fsType: "ext3" 
    volumeID: "d0f3cda0-cf89-45ae-8a79-fb083f6884f2"
[root@master ~]# oc create -f registry-pv.yaml
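
Optionally verify the PV registered and shows as Available before continuing:

  oc get pv registry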

3. Create a PersistentVolumeClaim to use the above PV:

[root@master ~]# cat registry-pvc.json 
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "registry"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "25Gi"
            }
        }
    }
}
[root@master ~]# oc create -f registry-pvc.json
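
Optionally confirm the claim bound to the PV (STATUS should show Bound):

  oc get pvc registry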

4. Use that claim in a DC. Using the docker-registry as an example here, something like:
# oc volume dc/docker-registry --add --name=registry-storage -t pvc --claim-name=registry --overwrite

resulting in this in the registry's DC:

      volumes:
      - name: registry-storage
        persistentVolumeClaim:
          claimName: registry

5. Wait for the above DC to be deployed (trigger if needed)
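
If nothing deploys automatically, the deployment can be triggered by hand, e.g. with the client from this release:

  oc deploy docker-registry --latest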

Actual results:

Looking at the node where the pod is running, we can see the volume:

    [root@node2 ~]# grep cinder/registry /proc/mounts
    /dev/vdc /var/lib/origin/openshift.local.volumes/pods/f4f3e79b-ae4d-11e5-9a3c-fa163e8e7483/volumes/kubernetes.io~cinder/registry ext3 rw,seclabel,relatime,data=ordered 0 0

and we can see that the filesystem that was created there has its mount point owned by root, mode 755:

    [root@node2 ~]# ls -la /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/cinder/mounts/d0f3cda0-cf89-45ae-8a79-fb083f6884f2
    total 20
    drwxr-xr-x. 3 root root  4096 29 des 11:59 .
    drwxr-x---. 3 root root    49 29 des 12:03 ..
    drwx------. 2 root root 16384 29 des 11:59 lost+found

As a result, the pod can't write there:

  time="2015-12-30T04:38:07-05:00" level=error msg="An error occured" err.code=UNKNOWN err.detail="mkdir /registry/docker: permission denied" ...

Expected results:

Pods from the DC that has the RWO PVC can write to their new volume
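
A quick way to check the expected behaviour from inside the cluster (the pod name below is a placeholder for an actual registry pod):

  oc exec <registry-pod> -- touch /registry/write-test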
Comment 2 Sami Wagiaalla 2016-01-15 17:01:28 EST
Hi Josep,

I have tracked this down with the help of Paul Weil.

The permissions problem is solved through the use of fsGroup. However, automatic assignment of fsGroups to pods was turned off in 3.1.

To work around the issue, you can manually add an fsGroup to your DC:

oc edit dc docker-registry
Change the pod-level security context from:
      securityContext: {}
to
      securityContext:
        fsGroup: 1234

Wait for the DC's pods to redeploy; the cinder volume should now be owned by group 1234 and writable by that group.
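
The same change can also be applied non-interactively; a sketch using oc patch (the fsGroup value is just an example, as above):

  oc patch dc/docker-registry -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":1234}}}}}'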
Comment 3 Sami Wagiaalla 2016-01-29 10:10:36 EST
This has been marked as NEEDINFO for a while. I am going to close it.

Josep, if you find that the above is not working for you, please reopen the bug.
Comment 4 Marcel Wysocki 2016-03-31 05:55:20 EDT
I am experiencing the same issue on OSE 3.1.1.6 using the cinder backend.
Comment 5 Sami Wagiaalla 2016-03-31 19:35:29 EDT
Hi Marcel,

Does comment #2 help?
Comment 6 Marcel Wysocki 2016-04-04 04:25:03 EDT
It does, but it's just a bad user experience :(
Comment 7 Sami Wagiaalla 2016-04-04 09:55:43 EDT
Marcel,

To enable automatic fsGroup assignment:
  oc get pod -o json | grep scc   # get the SCC name from the pod annotation
  oc edit scc <scc name>
  # set the fsGroup type to MustRunAs instead of RunAsAny
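
For reference, the resulting stanza in the SCC (e.g. the restricted SCC, if that is what the pod runs under) would look roughly like:

  fsGroup:
    type: MustRunAs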

This should be on by default in OSE 3.2 and later.
Comment 8 Sami Wagiaalla 2016-05-02 09:44:59 EDT
*** Bug 1331730 has been marked as a duplicate of this bug. ***
