Bug 1276038 - Can't access files from the downward API volume
Summary: Can't access files from the downward API volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.0.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Paul Morie
QA Contact: Qixuan Wang
URL:
Whiteboard:
Depends On:
Blocks: 1267746
 
Reported: 2015-10-28 13:32 UTC by Josep 'Pep' Turro Mauri
Modified: 2019-11-14 07:05 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-12 16:24:58 UTC
Target Upstream Version:
Embargoed:


Attachments
atomic-openshift-node logs (460.46 KB, text/plain)
2016-02-16 07:45 UTC, Qixuan Wang


Links
Red Hat Product Errata RHSA-2016:1064 (SHIPPED_LIVE) - Important: Red Hat OpenShift Enterprise 3.2 security, bug fix, and enhancement update - last updated 2016-05-12 20:19:17 UTC

Description Josep 'Pep' Turro Mauri 2015-10-28 13:32:31 UTC
Description of problem:

Non-privileged pods (running as a normal/random UID) cannot access data from a downward API volume.

Version-Release number of selected component (if applicable):

openshift v3.0.2.0-17-g701346b
kubernetes v1.1.0-alpha.0-1605-g44c91b1


How reproducible:
Always

Steps to Reproduce:

Using one of the default example templates here (cakephp) and the docs for the downward API:
https://access.redhat.com/documentation/en/openshift-enterprise/version-3.0/openshift-enterprise-30-developer-guide/#dapi-using-volume-plugin

1. Create an app: oc new-app cakephp-example
2. Edit its deploymentConfig and add a volume for downward API data to the pod spec (spec.template.spec); a combined sketch of the result follows after step 3:

   a) inside the container spec add:

        volumeMounts:
        - mountPath: /var/tmp/podinfo
          name: podinfo

   b) and in the pod spec:

      volumes:
      - metadata:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            name: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            name: annotations
        name: podinfo

3. Deploy and try to access the data from inside the container
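
   For reference, a sketch combining the two fragments from step 2 into a single spec.template.spec; the container name is illustrative, the rest is taken verbatim from the fragments above:

      spec:
        containers:
        - name: cakephp-example   # illustrative; keep the existing container's name
          # ... existing image, ports, etc. ...
          volumeMounts:
          - mountPath: /var/tmp/podinfo
            name: podinfo
        volumes:
        - name: podinfo
          metadata:
            items:
            - name: labels
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.labels
            - name: annotations
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.annotations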

Actual results:

$ oc exec cakephp-example-2-n6bfr cat /var/tmp/podinfo/labels
cat: /var/tmp/podinfo/labels: Permission denied
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1

Expected results:

It should be possible to access these files.

Additional info:

The timestamped dir that contains the files was created with mode 0700 and owned by root:

$ oc rsh cakephp-example-2-n6bfr
bash-4.2$ id
uid=1000030000 gid=0(root)
bash-4.2$ cd /var/tmp/podinfo
bash-4.2$ ls -la
total 4
drwxrwxrwt. 3 root root  120 Oct 28 09:26 .
drwxrwxrwt. 3 root root 4096 Oct 28 09:26 ..
drwx------. 2 root root   80 Oct 28 09:26 .2015_10_28_09_26_43637488077
lrwxrwxrwx. 1 root root   29 Oct 28 09:26 .current -> .2015_10_28_09_26_43637488077
lrwxrwxrwx. 1 root root   20 Oct 28 09:26 annotations -> .current/annotations
lrwxrwxrwx. 1 root root   15 Oct 28 09:26 labels -> .current/labels

Comment 1 Dan Mace 2015-10-28 18:22:13 UTC
Reassigning to Storage since this isn't a deployment issue.

Comment 3 Andy Goldstein 2015-10-29 15:48:36 UTC
What is the SELinux label of your node's volume dir?

Comment 4 Josep 'Pep' Turro Mauri 2015-10-29 16:00:23 UTC
I don't think it's an SELinux issue; it's just raw permissions/ownership at play here: the data is put into a timestamped directory (symlinked from .current) that is owned by root, mode 0700.

Some more data just in case: looking at the volume from the node:

[root@ose-node-compute-2d795 ~]# mount | grep podinfo
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/87deaeee-7d77-11e5-a033-525400d9bd5d/volumes/kubernetes.io~metadata/podinfo type tmpfs (rw,relatime,rootcontext=system_u:object_r:svirt_sandbox_file_t:s0,seclabel)

[root@ose-node-compute-2d795 ~]# cd /var/lib/openshift/openshift.local.volumes/pods/87deaeee-7d77-11e5-a033-525400d9bd5d/volumes/kubernetes.io~metadata/podinfo/
[root@ose-node-compute-2d795 podinfo]# ls -laZ
drwxrwxrwt. root root system_u:object_r:svirt_sandbox_file_t:s0 .
drwxr-xr-x. root root system_u:object_r:svirt_sandbox_file_t:s0 ..
drwx------. root root system_u:object_r:svirt_sandbox_file_t:s0 .2015_10_29_05_56_06909120942
lrwxrwxrwx. root root system_u:object_r:svirt_sandbox_file_t:s0 annotations -> .current/annotations
lrwxrwxrwx. root root system_u:object_r:svirt_sandbox_file_t:s0 .current -> .2015_10_29_05_56_06909120942
lrwxrwxrwx. root root system_u:object_r:svirt_sandbox_file_t:s0 labels -> .current/labels

SELinux labels look correct, right?

Trying to access it from the pod:

$ oc rsh cakephp-example-2-n6bfr
bash-4.2$ cd /var/tmp/podinfo
bash-4.2$ ls -la
total 4
drwxrwxrwt. 3 root root  120 Oct 29 05:56 .
drwxrwxrwt. 3 root root 4096 Oct 29 05:56 ..
drwx------. 2 root root   80 Oct 29 05:56 .2015_10_29_05_56_06909120942
lrwxrwxrwx. 1 root root   29 Oct 29 05:56 .current -> .2015_10_29_05_56_06909120942
lrwxrwxrwx. 1 root root   20 Oct 29 05:56 annotations -> .current/annotations
lrwxrwxrwx. 1 root root   15 Oct 29 05:56 labels -> .current/labels
bash-4.2$ cat labels
cat: labels: Permission denied

The node did not record any AVC denials for this:

[root@ose-node-compute-2d795 podinfo]# ausearch -m avc -ts recent
<no matches>

Comment 6 Andy Goldstein 2015-11-03 15:11:13 UTC
Proposed upstream fix: https://github.com/kubernetes/kubernetes/pull/16614

Comment 7 Paul Morie 2015-11-06 17:58:49 UTC
The root issue here is that some volumes are managed by the kubelet but are read-only to pods.  The linked PR adds a way to account for the distinction and appropriately manage ownership on these types of volumes.
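
For illustration, assuming the usual fsGroup semantics land with that PR, a pod-level securityContext like the sketch below should make the kubelet set the group owner of the files in these kubelet-managed volumes to the fsGroup GID and make them group-readable, so a container process running as a random UID (which gets fsGroup as a supplemental group) can read them. The GID here is arbitrary:

    spec:
      securityContext:
        fsGroup: 1234   # assumption: kubelet chgrps the volume contents to this GID and adds group read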

Comment 8 Paul Morie 2016-01-05 14:10:34 UTC
Should be fixed now.

Comment 9 Qixuan Wang 2016-01-06 03:08:30 UTC
Verification needs the Kubernetes rebase into Origin, so I am moving the bug to MODIFIED temporarily.

Comment 10 Andy Goldstein 2016-01-12 14:21:46 UTC
Not a 3.1.1 blocker. It will be delivered with the next Kube rebase into Origin.

Comment 11 Paul Morie 2016-02-03 21:50:15 UTC
Should be in master now.

Comment 12 Qixuan Wang 2016-02-05 07:15:10 UTC
Refer to https://bugzilla.redhat.com/show_bug.cgi?id=1277092

Comment 13 Qixuan Wang 2016-02-14 13:32:34 UTC
Tested with the latest puddle: atomic-openshift-3.1.1.901-1.git.0.1aee00d.el7.x86_64

# openshift version
openshift v3.1.1.901
kubernetes v1.2.0-origin
etcd 2.2.2+git

The problem has been fixed in Origin but is still present here:

# oc rsh pod-dapi-volume
bash-4.2$ ls -laR /var/tmp/podinfo/
/var/tmp/podinfo/:
total 0
drwxrwxrwt. 3 root root 120 Feb 14 06:27 .
drwxrwxrwt. 3 root root  20 Feb 14 06:28 ..
drwx------. 2 root root  80 Feb 14 06:27 ..2016_02_14_19_27_45033058333
lrwxrwxrwx. 1 root root  30 Feb 14 06:27 ..downwardapi -> ..2016_02_14_19_27_45033058333
lrwxrwxrwx. 1 root root  25 Feb 14 06:27 annotations -> ..downwardapi/annotations
lrwxrwxrwx. 1 root root  20 Feb 14 06:27 labels -> ..downwardapi/labels
ls: cannot open directory /var/tmp/podinfo/..2016_02_14_19_27_45033058333: Permission denied

Here is the pod spec file.
# cat pod-dapi-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-dapi-volume
  labels:
    region: r1
    zone: z11
    rack: a111
  annotations:
    build: one
    builder: qe-one
spec:
  securityContext:
    fsGroup: 1234
  containers:
    - name: client-container
      image: openshift3/ruby-20-rhel7
      command: ["sh", "-c", "while true; do if [[ -e /var/tmp/podinfo/labels ]]; then cat /var/tmp/podinfo/labels; fi; if [[ -e /var/tmp/podinfo/annotations ]]; then cat /var/tmp/podinfo/annotations; fi; sleep 5; done"] 
      securityContext:
        privileged: false
      volumeMounts:
        - name: podinfo
          mountPath: /var/tmp/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      metadata:
        items:
          - name: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - name: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

Comment 14 Paul Morie 2016-02-15 18:23:49 UTC
It looks like the code changes should have made it into this version.  I'm seeing if I can recreate now.

Comment 15 Paul Morie 2016-02-15 18:48:22 UTC
I ran the E2E test for this against a 3.1.1.901 cluster and it passed.  So, would you please reproduce this and include:

1.  openshift-node logs
2.  output of `oc get <pod name> -o yaml` (note: this will be different from the pod descriptor you submitted)
3.  output of `oc describe <pod name>`

That will help us track this down.  Thanks!

Comment 16 Paul Morie 2016-02-16 00:45:26 UTC
Changing needinfo for this, sorry Pep.

Comment 17 Paul Morie 2016-02-16 04:28:25 UTC
When you recreate, please use --loglevel=5

Comment 18 Qixuan Wang 2016-02-16 07:45:36 UTC
Created attachment 1127499 [details]
atomic-openshift-node logs

Comment 19 Qixuan Wang 2016-02-16 07:52:17 UTC
Tested with the latest puddle: atomic-openshift-3.1.1.902-1.git.0.d625c01.el7.x86_64

# openshift version
openshift v3.1.1.902
kubernetes v1.2.0-origin
etcd 2.2.2+git

Kept OPTIONS=--loglevel=5 on /etc/sysconfig/atomic-openshift-node

I noticed that "fsGroup" is not written into the securityContext, which differs from the behavior against Origin.

OSE/AEP:
securityContext: {}

Origin:
securityContext:
    fsGroup: 1234
    seLinuxOptions:
      level: s0:c6,c0


The node log is attached. Here is more info.
# oc get pod pod-dapi-volume -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    build: one
    builder: qe-one
    openshift.io/scc: restricted
  creationTimestamp: 2016-02-16T07:21:59Z
  labels:
    rack: a111
    region: r1
    zone: z11
  name: pod-dapi-volume
  namespace: qwang1
  resourceVersion: "2138"
  selfLink: /api/v1/namespaces/qwang1/pods/pod-dapi-volume
  uid: f7bed65e-d47d-11e5-9ee2-fa163ee7299c
spec:
  containers:
  - command:
    - sh
    - -c
    - while true; do if [[ -e /var/tmp/podinfo/labels ]]; then cat /var/tmp/podinfo/labels;
      fi; if [[ -e /var/tmp/podinfo/annotations ]]; then cat /var/tmp/podinfo/annotations;
      fi; sleep 5; done
    image: openshift3/ruby-20-rhel7
    imagePullPolicy: IfNotPresent
    name: client-container
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: false
      runAsUser: 1000040000
      seLinuxOptions:
        level: s0:c6,c5
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/tmp/podinfo
      name: podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-0ufgv
      readOnly: true
  dnsPolicy: ClusterFirst
  host: openshift-148.lab.eng.nay.redhat.com
  imagePullSecrets:
  - name: default-dockercfg-ufek6
  nodeName: openshift-148.lab.eng.nay.redhat.com
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - downwardAPI:
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        path: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    metadata:
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        name: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        name: annotations
    name: podinfo
  - name: default-token-0ufgv
    secret:
      secretName: default-token-0ufgv
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-02-16T07:22:03Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://b7d01010da9a9815928ef759bff0207acaf9205c36496ea2447b5093ed6d4b53
    image: openshift3/ruby-20-rhel7
    imageID: docker://be90266e447e20b841eb19f856dcc616a7cbf23ecdd9779d0206aa140b632031
    lastState: {}
    name: client-container
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-02-16T07:22:03Z
  hostIP: 10.66.79.148
  phase: Running
  podIP: 10.2.0.5
  startTime: 2016-02-16T07:21:59Z


# oc describe pod pod-dapi-volume
Name:				pod-dapi-volume
Namespace:			qwang1
Image(s):			openshift3/ruby-20-rhel7
Node:				openshift-148.lab.eng.nay.redhat.com/10.66.79.148
Start Time:			Tue, 16 Feb 2016 15:21:59 +0800
Labels:				rack=a111,region=r1,zone=z11
Status:				Running
Reason:				
Message:			
IP:				10.2.0.5
Replication Controllers:	<none>
Containers:
  client-container:
    Container ID:	docker://b7d01010da9a9815928ef759bff0207acaf9205c36496ea2447b5093ed6d4b53
    Image:		openshift3/ruby-20-rhel7
    Image ID:		docker://be90266e447e20b841eb19f856dcc616a7cbf23ecdd9779d0206aa140b632031
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Tue, 16 Feb 2016 15:22:03 +0800
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  podinfo:
  <Volume Type Not Found>
  default-token-0ufgv:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-0ufgv
Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath				Reason		Message
  ─────────	────────	─────	────						─────────────				──────		───────
  57s		57s		1	{default-scheduler }									Scheduled	Successfully assigned pod-dapi-volume to openshift-148.lab.eng.nay.redhat.com
  55s		55s		1	{kubelet openshift-148.lab.eng.nay.redhat.com}	spec.containers{client-container}	Pulled		Container image "openshift3/ruby-20-rhel7" already present on machine
  54s		54s		1	{kubelet openshift-148.lab.eng.nay.redhat.com}	spec.containers{client-container}	Created		Created container with docker id b7d01010da9a
  53s		53s		1	{kubelet openshift-148.lab.eng.nay.redhat.com}	spec.containers{client-container}	Started		Started container with docker id b7d01010da9a

Comment 20 Paul Morie 2016-02-16 15:59:42 UTC
You need to have fsGroup in the pod's security context for this to work as a non-root UID.  Try running with the 'anyuid' SCC and setting fsGroup to 1234 manually.

Comment 21 Paul Morie 2016-02-16 16:02:02 UTC
Actually, disregard my comment re: anyuid SCC.  Specifying FSGroup manually should be enough.
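
For reference, this is the pod-level setting being discussed, exactly as it appears in the spec in comment 13:

    spec:
      securityContext:
        fsGroup: 1234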

Comment 22 Qixuan Wang 2016-02-17 09:41:27 UTC
I used the pod YAML from comment 13 with fsGroup specified, and got the results in comment 19. The only reason was that I was running a previous OpenShift build locally: my script did not detect that its update had failed. I apologize for the oversight. The bug has been fixed. Thanks again!

Comment 23 Fred van Zwieten 2016-04-06 10:00:13 UTC
Is this fix part of the currently released Atomic Host?

Comment 25 errata-xmlrpc 2016-05-12 16:24:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1064

