Description of problem:
Whether defaultMode is omitted, set without a value, or set to any valid permission, once the pod is created and running, all files created by the secret volume mount have permission 2777.

Version-Release number of selected component (if applicable):
openshift v1.4.0-alpha.0+3687062
kubernetes v1.4.0+776c994
etcd 3.1.0-alpha.1

How reproducible:
Always

Steps to Reproduce:
1. Create a secret
# cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  data-1: dmFsdWUtMQ0K
  data-2: dmFsdWUtMg0KDQo=
# oc create -f secret.yaml

2. Create a pod with a permission mode on the secret volume
# cat secret-permission-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-permission-pod
spec:
  containers:
  - name: secret-permission-pod
    image: redis
    volumeMounts:
    - name: foo-volume
      mountPath: "/etc/foo"
  volumes:
  - name: foo-volume
    secret:
      secretName: test-secret
      defaultMode: 0744
# oc create -f secret-permission-pod.yaml

3. Check the permission of the files in the secret volume
# oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"

Actual results:
[root@ip-172-18-9-41 ~]# oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
2777

Expected results:
[root@ip-172-18-9-41 ~]# oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744

The files should have the same permission as the specified defaultMode. If defaultMode is not specified, 0644 should be used by default.

Additional info:
*** This bug has been marked as a duplicate of bug 1387306 ***
Reopening as the fix for 1387306 did not include a fix for this issue.
I can't recreate this in upstream Kube or OpenShift. Using the pod and secret spec provided in the description:

=== Kube w/ defaultMode ===
$ kubectl create -f secret.yaml
secret "test-secret" created
$ kubectl create -f pod.yaml    <-- with defaultMode 0744
pod "secret-permission-pod" created
$ kubectl exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744

=== Kube w/o defaultMode ===
$ kubectl create -f secret.yaml
secret "test-secret" created
$ kubectl create -f pod.yaml    <-- no defaultMode
pod "secret-permission-pod" created
$ kubectl exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
644

=== OpenShift w/ defaultMode ===
$ oc create -f secret.yaml
secret "test-secret" created
$ oc create -f pod.yaml
pod "secret-permission-pod" created
$ oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744

It's possible this has already been fixed. Can you confirm?
The problem can't be reproduced on OCP 3.4.0.26 (openshift v3.4.0.26+f7e109e, kubernetes v1.4.0+776c994, etcd 3.1.0-rc.0) or Kubernetes v1.6.0-alpha.0.551+8fefda3bc3c134, but it still exists on OpenShift Origin (devenv-rhel7_5364, openshift v1.4.0-alpha.1+7412a0e-193, kubernetes v1.4.0+776c994, etcd 3.1.0-rc.0).

OpenShift with/without defaultMode:
[root@ip-172-18-12-68 home]# oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
777
I tried the exact Origin version and secret/pod spec specified and can't reproduce.

with defaultMode
$ oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744

without defaultMode
$ oc exec -it secret-permission-pod -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
644

$ oc version
oc v1.4.0-alpha.1+7412a0e-193
kubernetes v1.4.0+776c994
features: Basic-Auth

Server https://10.42.10.23:8443
openshift v1.4.0-alpha.1+7412a0e-193
kubernetes v1.4.0+776c994

If you can reproduce this, it must be something environmental. Can I get access to the machine on which you can reproduce this?
The reason is that I accessed OpenShift Origin and OCP with different authorities. Take OCP as an example: log in with "oc login" and create project "qwang1"; access the master directly and create project "qwang4". Comparison:

[root@host-8-174-41 home]# oc exec -it secret-permission-pod -n qwang1 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
777
[root@host-8-174-41 home]# oc exec -it secret-permission-pod -n qwang4 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744
[root@host-8-174-41 home]# oc get pod secret-permission-pod -n qwang1 -o yaml | grep scc
    openshift.io/scc: restricted
[root@host-8-174-41 home]# oc get pod secret-permission-pod -n qwang4 -o yaml | grep scc
    openshift.io/scc: anyuid
[root@host-8-174-41 home]# oc get pod secret-permission-pod -n qwang1 -o yaml | grep fsGroup
    fsGroup: 1001130000
[root@host-8-174-41 home]# oc get pod secret-permission-pod -n qwang4 -o yaml | grep fsGroup
[root@host-8-174-41 home]#

The command "oadm policy add-scc-to-user anyuid $user" adds the anyuid SCC to a user; after doing that and recreating the pod, I get the expected defaultMode permission under "qwang1". This raises two questions:

Q1: Why does defaultMode always turn into 484, with or without fsGroup?
[root@host-8-174-41 home]# oc get pod secret-permission-pod -o yaml
<------------------->
  volumes:
  - name: foo-volume
    secret:
      defaultMode: 484
      secretName: test-secret
<------------------->

Q2: Why is the secret file permission 777 if the pod has fsGroup?

Details are on http://pastebin.test.redhat.com/431059 for your reference.
Ah ok, thanks for the explanation! I was running as admin. I verified that, for non-admin OpenShift users, a security context that includes an fsGroup is added to the pod spec. When fsGroup is specified, defaultMode is ignored. This should not be the case. Investigating.
Upstream PR: https://github.com/kubernetes/kubernetes/pull/37009
Origin PR: https://github.com/openshift/origin/pull/11959
Origin 1.4 PR: https://github.com/openshift/origin/pull/11960

Both have merged.
Verified in Origin; the issue has been fixed.

# openshift version
openshift v1.5.0-alpha.1+9e682de-55
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

1. oc login
2. Add the login user to the "anyuid" policy
# oadm policy add-scc-to-user anyuid chezhang
3. Create a pod with a downward API volume using the default permission mode
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/pods/permission-data/dapi-permission-pod.yaml
pod "dapi-permission-pod" created
# oc get pod
NAME                  READY     STATUS    RESTARTS   AGE
dapi-permission-pod   1/1       Running   0          9s
4. Log in to the pod and check the permission of the files in the downward API volume
# oc exec -it dapi-permission-pod -- sh -c "stat -c %a /var/tmp/podinfo"
1777
# oc exec -it dapi-permission-pod -- sh -c "cat /var/tmp/podinfo/..data/labels; echo; stat -c %a /var/tmp/podinfo/..data/labels"
rack="a111"
region="r1"
zone="z11"
400
# oc exec -it dapi-permission-pod -- sh -c "cat /var/tmp/podinfo/..data/annotations; echo; stat -c %a /var/tmp/podinfo/..data/annotations"
build="one"
builder="qe-one"
kubernetes.io/config.seen="2017-01-06T00:43:31.423056666-05:00"
kubernetes.io/config.source="api"
openshift.io/scc="anyuid"
400
Furthermore, the permission mode works well under the "restricted" policy with fsGroup in Origin.

1. oc login
2. Create a pod with a downward API volume, setting different permissions for different files in the volume
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/pods/permission-data/dapi-keys-permission-pod.yaml
pod "dapi-keys-permission-pod" created
$ oc get pod
NAME                       READY     STATUS    RESTARTS   AGE
dapi-keys-permission-pod   1/1       Running   0          13
3. By default, the user is under the "restricted" policy with fsGroup
$ oc get pod dapi-keys-permission-pod -o yaml | grep -E "scc|fsGroup"
    openshift.io/scc: restricted
    fsGroup: 1000190000
4. Log in to the pod and check the permission of the files in the downward API volume
$ oc exec -it dapi-keys-permission-pod -- sh -c "stat -c %a /var/tmp/podinfo"
3777
$ oc exec -it dapi-keys-permission-pod -- sh -c "cat /var/tmp/podinfo/..data/labels; echo; stat -c %a /var/tmp/podinfo/..data/labels"
rack="a111"
region="r1"
zone="z11"
440
$ oc exec -it dapi-keys-permission-pod -- sh -c "cat /var/tmp/podinfo/..data/annotations; echo; stat -c %a /var/tmp/podinfo/..data/annotations"
build="one"
builder="qe-one"
kubernetes.io/config.seen="2017-01-06T01:48:00.289886375-05:00"
kubernetes.io/config.source="api"
openshift.io/scc="restricted"
541
Tested on Origin devenv-rhel7_5684 (openshift v1.5.0-alpha.1+b53352c-251, kubernetes v1.5.0-beta.2+225eecc, etcd 3.1.0-rc.0) and OCP (openshift v3.5.0.4+86a6117, kubernetes v1.5.0-beta.2+225eecc, etcd 3.1.0-rc.0); the bug has been fixed. Thanks. I will verify the bug once it is ON_QA.
Looks like this fell out of the pipeline. Sounds like it is verified but needs QA to move it.
Tested on OCP 3.6 (openshift v3.6.136, kubernetes v1.6.1+5115d708d7, etcd 3.2.1). From the following test results, it seems the read permission is set for u and g by default. I didn't find a related description in the documentation. Is this expected?

================> Correct
# oc exec -it secret-permission-pod-0744 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
744
# oc exec -it secret-permission-pod-0744 -- sh -c "ls -l /etc/foo/..data/data-1"
-rwxr--r--. 1 root 1000100000 9 Jul 7 06:48 /etc/foo/..data/data-1
# oc exec -it secret-permission-pod-default -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
644
# oc exec -it secret-permission-pod-default -- sh -c "ls -l /etc/foo/..data/data-1"
-rw-r--r--. 1 root 1000100000 9 Jul 7 06:48 /etc/foo/..data/data-1

================> u+r, g+r by default
# oc exec -it secret-permission-pod-0400 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
440
# oc exec -it secret-permission-pod-0400 -- sh -c "ls -l /etc/foo/..data/data-1"
-r--r-----. 1 root 1000100000 9 Jul 7 06:49 /etc/foo/..data/data-1
# oc exec -it secret-permission-pod-0420 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
460
# oc exec -it secret-permission-pod-0420 -- sh -c "ls -l /etc/foo/..data/data-1"
-r--rw----. 1 root 1000100000 9 Jul 7 07:02 /etc/foo/..data/data-1
# oc exec -it secret-permission-pod-0321 -- sh -c "cat /etc/foo/..data/data-1; stat -c %a /etc/foo/..data/data-1"
value-1
761
# oc exec -it secret-permission-pod-0321 -- sh -c "ls -l /etc/foo/..data/data-1"
-rwxrw---x. 1 root 1000100000 9 Jul 7 07:37 /etc/foo/..data/data-1
Yes, this is expected. When using fsGroup, the file needs to be readable by the group, so the defaultMode is ORed with 0440. This explains your observed results.

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/volume/volume_linux.go#L80-L92

There is still the outstanding issue of the defaultMode read back from the API server not being what was set in the spec file. You mentioned this in comment 8.

volumes:
- name: foo-volume
  secret:
    defaultMode: 484
    secretName: test-secret

The defaultMode in the pod spec was 0744. Could you open a separate bug for that and set the severity to low, since it does not cause any functional issues (the mode is set correctly in the actual container); it is just aesthetically messy.
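For reference, a minimal Go sketch of that masking behavior. The function name and the exact 0440 mask are assumptions for illustration only; this is not the actual Kubernetes helper linked above.

package main

import (
	"fmt"
	"os"
)

// applyFsGroupMask roughly mirrors what happens on a read-only volume when
// fsGroup is set: group read bits are ORed into the existing mode, and the
// directory additionally gets the setgid bit so files inherit the volume's
// group (assumed behavior; names are illustrative).
func applyFsGroupMask(mode os.FileMode, isDir bool) os.FileMode {
	mask := os.FileMode(0440) // assumed group-read mask for read-only volumes
	if isDir {
		mask |= os.ModeSetgid
	}
	return mode | mask
}

func main() {
	// A file written with defaultMode 0400 ends up 0440, matching the
	// "stat -c %a" output in the previous comment.
	file := applyFsGroupMask(0400, false)
	fmt.Printf("file mode: %o\n", file&os.ModePerm) // 440

	// The volume directory also picks up setgid, which is why the mount
	// point reports 2777/3777 rather than 777.
	dir := applyFsGroupMask(0777, true)
	fmt.Printf("dir mode: %o (setgid=%v)\n", dir&os.ModePerm, dir&os.ModeSetgid != 0)
}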
The outstanding issue has been solved; the defaultMode in the pod spec is now correct. Could you move this to ON_QA so I can move forward?

Octal: 644 ---> Decimal: 420
# oc get pod secret-permission-pod-default -o yaml | grep -A4 volumes
  volumes:
  - name: foo-volume
    secret:
      defaultMode: 420
      secretName: test-secret

Octal: 744 ---> Decimal: 484
# oc get pod secret-permission-pod-0744 -o yaml | grep -A4 volumes
  volumes:
  - name: foo-volume
    secret:
      defaultMode: 484
      secretName: test-secret

Octal: 440 ---> Decimal: 288
# oc get pod secret-permission-pod-0440 -o yaml | grep -A4 volumes
  volumes:
  - name: foo-volume
    secret:
      defaultMode: 288
      secretName: test-secret
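For anyone reading the values above later: the apparent mismatch is purely octal versus decimal notation. JSON/YAML have no octal literals, so the API stores defaultMode as a plain integer and echoes it back in decimal; an octal literal such as 0744 therefore shows up as 484. A short Go sketch of the mapping (illustration only, not OpenShift code):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// The same mode value in octal (as written in the pod spec) and in
	// decimal (as echoed back by the API server).
	for _, mode := range []int64{0644, 0744, 0440} {
		fmt.Printf("octal %s ---> decimal %d\n", strconv.FormatInt(mode, 8), mode)
	}
	// Output:
	// octal 644 ---> decimal 420
	// octal 744 ---> decimal 484
	// octal 440 ---> decimal 288
}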
back to QA after clarification
Has this been fixed in any releases?
It is in 3.5 and later.