Bug 1339146 - [Container Installation only][docker1.10] Downward api volume can not work with docker 1.10
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Antonio Murdaca
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-24 08:55 UTC by weiwei jiang
Modified: 2019-03-06 02:13 UTC
CC: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 16:18:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2016:1274 (SHIPPED_LIVE, priority normal): docker bug fix and enhancement update. Last updated 2016-06-23 20:12:28 UTC.

Description weiwei jiang 2016-05-24 08:55:33 UTC
Description of problem:
When creating a pod with a downward API volume, the pod fields are not mounted into the pod.
 oc exec -it kubernetes-metadata-volume-example sh
/ $ ls -laR /etc/
/etc/:
total 16
drwxrwxrwt    2 0        0              100 May 24 08:15 .
drwxr-xr-x   17 0        0             4096 May 24 08:15 ..
-rw-r--r--    1 0        0               35 May 24 08:15 hostname
-rw-r--r--    1 0        0              227 May 24 08:15 hosts
-rw-r--r--    1 0        0              140 May 24 08:15 resolv.conf
/ $ exit


Version-Release number of selected component (if applicable):
# openshift version 
openshift v3.2.0.44
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

How reproducible:
always

Steps to Reproduce:
1. Create a pod with downward api volume
oc create -f <(echo 'kind: Pod
apiVersion: v1beta3
id: kurbernetes-metadata-volume-plugin
metadata:
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  name: kubernetes-metadata-volume-example
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      metadata:
        items:
          - name: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - name: "annotations"
            fieldRef:
              fieldPath: metadata.annotations')
2. Check the volumes for the pod after the pod is ready

Actual results:
2. oc exec -it kubernetes-metadata-volume-example sh
/ $ ls -laR /etc/
/etc/:
total 16
drwxrwxrwt    2 0        0              100 May 24 08:15 .
drwxr-xr-x   17 0        0             4096 May 24 08:15 ..
-rw-r--r--    1 0        0               35 May 24 08:15 hostname
-rw-r--r--    1 0        0              227 May 24 08:15 hosts
-rw-r--r--    1 0        0              140 May 24 08:15 resolv.conf
/ $ exit


Expected results:
oc exec -it kubernetes-metadata-volume-example sh
/ $ ls -laR /etc
/etc:
total 16
drwxrwsrwt    3 0        10000800       180 May 24 08:13 .
drwxr-xr-x   17 0        0             4096 May 24 08:14 ..
drwxrwsrwx    2 0        10000800        80 May 24 08:13 ..2016_05_24_04_13_46029343728
lrwxrwxrwx    1 0        10000800        30 May 24 08:13 ..downwardapi -> ..2016_05_24_04_13_46029343728
lrwxrwxrwx    1 0        0               25 May 24 07:50 annotations -> ..downwardapi/annotations
-rw-r--r--    1 0        0               35 May 24 08:14 hostname
-rw-r--r--    1 0        0              227 May 24 08:14 hosts
lrwxrwxrwx    1 0        0               20 May 24 07:50 labels -> ..downwardapi/labels
-rw-r--r--    1 0        0              139 May 24 08:14 resolv.conf

/etc/..2016_05_24_04_13_46029343728:
total 8
drwxrwsrwx    2 0        10000800        80 May 24 08:13 .
drwxrwsrwt    3 0        10000800       180 May 24 08:13 ..
-rwxrwsrwx    1 0        10000800       159 May 24 08:13 annotations
-rwxrwsrwx    1 0        10000800        59 May 24 08:13 labels


Additional info:
This issue cannot be reproduced with docker 1.9.

Comment 1 Andy Goldstein 2016-05-24 14:45:26 UTC
If you try a mountPath other than /etc, maybe /foo, does that work?

Comment 2 weiwei jiang 2016-05-25 08:12:44 UTC
(In reply to Andy Goldstein from comment #1)
> If you try a mountPath other than /etc, maybe /foo, does that work?

It does not seem to work.

oc exec -it kubernetes-metadata-volume-example sh
/ $ ls /
bin      foo      lib64    mnt      root     sys      var
dev      home     linuxrc  opt      run      tmp
etc      lib      media    proc     sbin     usr
/ $ ls -laR /foo
/foo:
total 4
drwxrwxrwt    2 root     root            40 May 25 08:07 .
drwxr-xr-x   18 root     root          4096 May 25 08:07 ..


And I have checked that the mount path has no downward API content.
# ls -laR /var/lib/origin/openshift.local.volumes/pods/b5a70a15-224f-11e6-b8f1-0eb214756b7f/volumes/kubernetes.io~downward-api/podinfo
/var/lib/origin/openshift.local.volumes/pods/b5a70a15-224f-11e6-b8f1-0eb214756b7f/volumes/kubernetes.io~downward-api/podinfo:
total 0
drwxrwxrwt. 2 root root 40 May 25 04:07 .
drwxr-xr-x. 3 root root 20 May 25 04:07 ..

Comment 3 Avesh Agarwal 2016-05-25 16:54:39 UTC
I am not able to reproduce this with docker-1.10.3 on f23 with latest kube (master head). Here are details on f23:

#rpm -qa docker
docker-1.10.3-20.git8ecd47f.fc23.x86_64

# cat ~/data-json-yaml-files/volume-pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations


## docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
39d849fda4f4        gcr.io/google_containers/busybox           "sh -c 'while true; d"   2 minutes ago       Up 2 minutes                            k8s_client-container.49f16a8a_kubernetes-downwardapi-volume-example_default_c3fe2a36-2298-11e6-a4b1-5254009d44b2_bcb2732b
663fcdf3e366        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD.d8dbe16c_kubernetes-downwardapi-volume-example_default_c3fe2a36-2298-11e6-a4b1-5254009d44b2_6931fbb7
[root@localhost kubernetes]# docker exec -it 39d849fda4f4 sh
/ # ls -al
total 24
drwxr-xr-x   17 0        0             4096 May 25 16:50 .
drwxr-xr-x   17 0        0             4096 May 25 16:50 ..
-rw-------    1 0        0               63 May 25 16:53 .ash_history
-rwxr-xr-x    1 0        0                0 May 25 16:50 .dockerenv
-rwxr-xr-x    1 0        0                0 May 25 16:50 .dockerinit
drwxrwxr-x    2 0        0             4096 May 22  2014 bin
drwxr-xr-x    5 0        0              380 May 25 16:50 dev
drwxrwxrwt    3 0        0              120 May 25 16:50 etc
drwxrwxr-x    4 0        0               30 May 22  2014 home
drwxrwxr-x    2 0        0             4096 May 22  2014 lib
lrwxrwxrwx    1 0        0                3 May 22  2014 lib64 -> lib
lrwxrwxrwx    1 0        0               11 May 22  2014 linuxrc -> bin/busybox
drwxrwxr-x    2 0        0                6 Feb 27  2014 media
drwxrwxr-x    2 0        0                6 Feb 27  2014 mnt
drwxrwxr-x    2 0        0                6 Feb 27  2014 opt
dr-xr-xr-x  288 0        0                0 May 25 16:50 proc
drwx------    2 0        0               65 Feb 27  2014 root
lrwxrwxrwx    1 0        0                3 Feb 27  2014 run -> tmp
drwxr-xr-x    2 0        0             4096 May 22  2014 sbin
dr-xr-xr-x   13 0        0                0 May 13 02:03 sys
drwxrwxrwt    4 0        0               35 May 25 16:50 tmp
drwxrwxr-x    6 0        0               61 May 22  2014 usr
drwxrwxr-x    4 0        0              104 May 22  2014 var
/ # cd etc/
/etc # ls -al /etc/
total 4
drwxrwxrwt    3 0        0              120 May 25 16:50 .
drwxr-xr-x   17 0        0             4096 May 25 16:50 ..
drwxr-xr-x    2 0        0               80 May 25 16:50 ..5985_25_05_12_50_23.181248998
lrwxrwxrwx    1 0        0               31 May 25 16:50 ..data -> ..5985_25_05_12_50_23.181248998
lrwxrwxrwx    1 0        0               18 May 25 16:50 annotations -> ..data/annotations
lrwxrwxrwx    1 0        0               13 May 25 16:50 labels -> ..data/labels

Comment 4 Avesh Agarwal 2016-05-25 17:02:22 UTC
Just to be clear, the above observation was on non-containerized installation.

Comment 5 Andy Goldstein 2016-05-25 18:24:26 UTC
I can reproduce this with OSE containerized with Docker 1.10. It works with Docker 1.9.

Comment 6 Avesh Agarwal 2016-05-25 19:22:05 UTC
I also tested the way Andy tested and I have the same experience that it works with 1.9 but not with docker-latest-1.10.3-22.el7.x86_64. So the bug is reproducible. Please ignore my earlier comment as that was on non-containerized install.

Comment 7 Andy Goldstein 2016-05-25 19:43:44 UTC
The issue with 1.10 is that /usr/lib/systemd/system/docker-latest.service is missing MountFlags=slave. It's in /usr/lib/systemd/system/docker.service.
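
For readers hitting the same symptom, the gap Andy describes could in principle be papered over with a systemd drop-in (the drop-in path and filename below are hypothetical; this is a sketch, not the fix that was ultimately shipped, which landed in the docker package itself):

```ini
# /etc/systemd/system/docker-latest.service.d/10-mountflags.conf
# Hypothetical drop-in restoring slave mount propagation for docker-latest,
# matching what /usr/lib/systemd/system/docker.service already sets.
[Service]
MountFlags=slave
```

After adding a drop-in like this, a `systemctl daemon-reload` followed by a restart of docker-latest would be needed for the flag to take effect.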

Comment 8 Andy Goldstein 2016-05-25 19:47:55 UTC
Lokesh, can you fix this?

Comment 9 Vivek Goyal 2016-05-25 20:41:06 UTC
@runcom On IRC we had a conversation, and it looks like openshift is relying on the old
behavior of mounting everything "slave" by default. Meanwhile we backported my volume mount propagation patch into 1.10 and aligned with the upstream default behavior of everything being "private".

Looks like this is a blocker for the openshift team. Is it possible to change the default behavior of 1.10?

Dan Walsh, do you have any concerns?

I am hoping that by openshift 3.3 this dependency will have been resolved and we won't have to carry that patch in future versions of docker.

Comment 10 Andy Goldstein 2016-05-25 20:44:35 UTC
An alternative fix is to modify the way we bind mount /var/lib/origin/openshift.local.volumes from the host to the ose node container. If we append :slave or :shared, that also fixes this bug.
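
As a sketch of Andy's alternative (the volume path and image tag are taken from this bug; the exact node invocation and container name vary by install and are assumptions here), the bind mount for the volume directory would gain a propagation suffix:

```shell
# Hypothetical sketch of running the containerized node with the volume
# directory bind-mounted using slave propagation, so tmpfs mounts created
# on the host (e.g. for downward-API volumes) propagate into the container.
docker run -d --name atomic-openshift-node --privileged --pid=host --net=host \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:slave \
  openshift3/ose:v3.2.0.44 start
```

Comment 35 below shows the actual invocation used to verify the fixed docker build, without the suffix.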

Comment 11 Daniel Walsh 2016-05-25 21:56:23 UTC
Yes, I prefer to move to :slave rather than carry a patch.

Comment 13 Lokesh Mandvekar 2016-05-26 00:15:36 UTC
(In reply to Andy Goldstein from comment #10)
> An alternative fix is to modify the way we bind mount
> /var/lib/origin/openshift.local.volumes from the host to the ose node
> container. If we append :slave or :shared, that also fixes this bug.

Andy, either way works for me, I could add the MountFlags=slave in docker-latest unitfile. Let me know which way to proceed.

Comment 14 Antonio Murdaca 2016-05-26 11:11:14 UTC
I think :slave|:shared is better, so as not to move too far from upstream.

Comment 15 Daniel Walsh 2016-05-26 11:47:04 UTC
I don't think we should add MountFlags=slave, but we could patch docker to default to slave mounting, as it does in docker-1.9. docker-1.10 currently defaults to private mounting, which is the upstream default.

Comment 16 Vivek Goyal 2016-05-26 12:45:01 UTC
If openshift can specify :slave suffix in their volume mounting, that would be the best as we don't have to move away from upstream. Also it is safer default as unintentional mounts on host will not leak into container.

Otherwise we will have to carry patch in docker to have slave propagation for all mounts.

Comment 17 Eric Paris 2016-05-26 13:05:19 UTC
I don't think Andy's alternative works, as we already shipped and can't get in our way-back machine to add it. Nor do I believe the :slave suffix is valid on 1.9, right? So how would we know if we can/should use it? (Same problem we are suffering with libseccomp .....)

Remember, the hope here is to update docker underneath openshift. Not to force openshift to have to update.

I get why the team doesn't like it, but "this worked yesterday and now it doesn't" is, I think, the definition of a regression. We'll work in 3.3 to follow the now-known deprecation you are hoping for.

Comment 18 Andy Goldstein 2016-05-26 13:38:00 UTC
I know we had some irc chats about this, but I at least want to record this here for posterity :-)

To fix this specific bug, where the downward api files aren't visible in pods using docker 1.10 and ose 3.2, we can make a change to the openshift-ansible playbook to modify the systemd unit file that runs the containerized atomic-openshift-node. We can append :slave to the bind mount for the volume directory and things will work.

As soon as pmorie is online today, I will have him weigh in on this bz as well.

Comment 21 Vivek Goyal 2016-05-26 14:05:30 UTC
Eric, I thought docker 1.10 would be used only for the upcoming version of openshift (3.2), not for an already-shipped version of openshift. Is that not the case?

If yes, then in theory we still have the opportunity to modify openshift? That said, the change might not be small, and might be too risky to make at this stage.

Comment 22 Eric Paris 2016-05-26 14:38:11 UTC
3.2 is already shipped. Next chance to update will be 3.3.

Comment 24 Daniel Walsh 2016-06-01 19:28:06 UTC
Antonio does the current docker build have the fixed version of rprivate?

Comment 25 Antonio Murdaca 2016-06-01 19:49:02 UTC
It does, assuming the current docker build has been built from the last commit of the rhel7-1.10.3 branch of projectatomic/docker.

Comment 29 Lokesh Mandvekar 2016-06-02 17:37:43 UTC
(In reply to Antonio Murdaca from comment #25)
> It does assuming the current docker build has been built from the last
> commit of rhel7-1.10.3 branch of projectatomic/docker

the current docker package uses the commit 47792252c76d4f10ae06795e91a982874ee02e8d

Comment 30 Lokesh Mandvekar 2016-06-02 17:38:41 UTC
(In reply to Lokesh Mandvekar from comment #29)
> (In reply to Antonio Murdaca from comment #25)
> > It does assuming the current docker build has been built from the last
> > commit of rhel7-1.10.3 branch of projectatomic/docker
> 
> the current docker package uses the commit
> 47792252c76d4f10ae06795e91a982874ee02e8d

which is the latest on rhel7-1.10.3

Comment 31 Daniel Walsh 2016-06-02 18:40:27 UTC
Lokesh are we building with golang 1.6?  And have we removed the docker-forwarder code?

Comment 32 Lokesh Mandvekar 2016-06-02 22:13:34 UTC
Do you mean forward-journald? It's still being used in both docker and docker-latest. We're still on golang 1.4.2 on rhel 7

Comment 33 Antonio Murdaca 2016-06-03 12:42:24 UTC
rslave change is in docker 1.10.3 built from rhel7-1.10.3 commit 47792252c76d4f10ae06795e91a982874ee02e8d

It's included in the -28 docker build in RHEL. Andy, can you test it out?

Comment 34 Avesh Agarwal 2016-06-03 14:50:44 UTC
Antonio, I am going to test it and will let you know.

Comment 35 Avesh Agarwal 2016-06-03 15:27:03 UTC
Hi Antonio,

It is working. I think it is also OK to test with -31:

rpm -qa|grep docker
python-docker-py-1.7.2-1.el7.noarch
docker-forward-journald-1.10.3-31.el7.x86_64
python-dockerfile-parse-0.0.5-1.el7eng.noarch
docker-selinux-1.10.3-31.el7.x86_64
docker-rhel-push-plugin-1.10.3-31.el7.x86_64
docker-common-1.10.3-31.el7.x86_64
docker-1.10.3-31.el7.x86_64
docker-v1.10-migrator-1.10.3-31.el7.x86_64

I started openshift as follows:

docker run -d --name "osetest"  --privileged --pid=host --net=host         -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes  openshift3/ose:v3.2.0.44 start

And then I created the pod as above, and I am able to see /etc/labels and /etc/annotations correctly.

Comment 36 Antonio Murdaca 2016-06-03 15:29:49 UTC
Great, thanks for checking

Comment 38 Luwen Su 2016-06-12 10:16:02 UTC
Per comment #35, moving to VERIFIED.

Comment 40 errata-xmlrpc 2016-06-23 16:18:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1274

