Bug 1339146
| Summary: | [Containerized installation only][docker 1.10] Downward API volume does not work with docker 1.10 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | weiwei jiang <wjiang> |
| Component: | docker | Assignee: | Antonio Murdaca <amurdaca> |
| Status: | CLOSED ERRATA | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.2 | CC: | agoldste, amurdaca, aos-bugs, avagarwa, dwalsh, eparis, jokerman, lfriedma, lsm5, lsu, mjenner, mmccomas, mnewby, vgoyal, wjiang, wsun |
| Target Milestone: | rc | Keywords: | Extras |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-06-23 16:18:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
weiwei jiang, 2016-05-24 08:55:33 UTC
---

If you try a mountPath other than /etc, maybe /foo, does that work?

---

(In reply to Andy Goldstein from comment #1)
> If you try a mountPath other than /etc, maybe /foo, does that work?

It does not seem to work:

```
oc exec -it kubernetes-metadata-volume-example sh
/ $ ls /
bin      foo      lib64    mnt      root     sys      var
dev      home     linuxrc  opt      run      tmp
etc      lib      media    proc     sbin     usr
/ $ ls -laR /foo
/foo:
total 4
drwxrwxrwt    2 root     root            40 May 25 08:07 .
drwxr-xr-x   18 root     root          4096 May 25 08:07 ..
```

I have also checked that the corresponding volume path on the host has no downward API content:

```
# ls -laR /var/lib/origin/openshift.local.volumes/pods/b5a70a15-224f-11e6-b8f1-0eb214756b7f/volumes/kubernetes.io~downward-api/podinfo
/var/lib/origin/openshift.local.volumes/pods/b5a70a15-224f-11e6-b8f1-0eb214756b7f/volumes/kubernetes.io~downward-api/podinfo:
total 0
drwxrwxrwt. 2 root root 40 May 25 04:07 .
drwxr-xr-x. 3 root root 20 May 25 04:07 ..
```

---

I am not able to reproduce this with docker-1.10.3 on f23 with the latest kube (master head). Here are the details on f23:

```
# rpm -qa docker
docker-1.10.3-20.git8ecd47f.fc23.x86_64

# cat ~/data-json-yaml-files/volume-pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
39d849fda4f4        gcr.io/google_containers/busybox           "sh -c 'while true; d"   2 minutes ago       Up 2 minutes                            k8s_client-container.49f16a8a_kubernetes-downwardapi-volume-example_default_c3fe2a36-2298-11e6-a4b1-5254009d44b2_bcb2732b
663fcdf3e366        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD.d8dbe16c_kubernetes-downwardapi-volume-example_default_c3fe2a36-2298-11e6-a4b1-5254009d44b2_6931fbb7

[root@localhost kubernetes]# docker exec -it 39d849fda4f4 sh
/ # ls -al
total 24
drwxr-xr-x   17 0        0             4096 May 25 16:50 .
drwxr-xr-x   17 0        0             4096 May 25 16:50 ..
-rw-------    1 0        0               63 May 25 16:53 .ash_history
-rwxr-xr-x    1 0        0                0 May 25 16:50 .dockerenv
-rwxr-xr-x    1 0        0                0 May 25 16:50 .dockerinit
drwxrwxr-x    2 0        0             4096 May 22  2014 bin
drwxr-xr-x    5 0        0              380 May 25 16:50 dev
drwxrwxrwt    3 0        0              120 May 25 16:50 etc
drwxrwxr-x    4 0        0               30 May 22  2014 home
drwxrwxr-x    2 0        0             4096 May 22  2014 lib
lrwxrwxrwx    1 0        0                3 May 22  2014 lib64 -> lib
lrwxrwxrwx    1 0        0               11 May 22  2014 linuxrc -> bin/busybox
drwxrwxr-x    2 0        0                6 Feb 27  2014 media
drwxrwxr-x    2 0        0                6 Feb 27  2014 mnt
drwxrwxr-x    2 0        0                6 Feb 27  2014 opt
dr-xr-xr-x  288 0        0                0 May 25 16:50 proc
drwx------    2 0        0               65 Feb 27  2014 root
lrwxrwxrwx    1 0        0                3 Feb 27  2014 run -> tmp
drwxr-xr-x    2 0        0             4096 May 22  2014 sbin
dr-xr-xr-x   13 0        0                0 May 13 02:03 sys
drwxrwxrwt    4 0        0               35 May 25 16:50 tmp
drwxrwxr-x    6 0        0               61 May 22  2014 usr
drwxrwxr-x    4 0        0              104 May 22  2014 var
/ # cd etc/
/etc # ls -al /etc/
total 4
drwxrwxrwt    3 0        0              120 May 25 16:50 .
drwxr-xr-x   17 0        0             4096 May 25 16:50 ..
drwxr-xr-x    2 0        0               80 May 25 16:50 ..5985_25_05_12_50_23.181248998
lrwxrwxrwx    1 0        0               31 May 25 16:50 ..data -> ..5985_25_05_12_50_23.181248998
lrwxrwxrwx    1 0        0               18 May 25 16:50 annotations -> ..data/annotations
lrwxrwxrwx    1 0        0               13 May 25 16:50 labels -> ..data/labels
```

---

Just to be clear, the above observation was on a non-containerized installation.

---

I can reproduce this with OSE containerized with Docker 1.10. It works with Docker 1.9.

---

I also tested the way Andy tested, and I have the same experience: it works with 1.9 but not with docker-latest-1.10.3-22.el7.x86_64. So the bug is reproducible. Please ignore my earlier comment, as that was on a non-containerized install.

---

The issue with 1.10 is that /usr/lib/systemd/system/docker-latest.service is missing MountFlags=slave. It's in /usr/lib/systemd/system/docker.service. Lokesh, can you fix this?
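For reference, a unit-file change like that is usually applied via a systemd drop-in rather than by editing the shipped file. The following is a minimal sketch of that approach, assuming the docker-latest service discussed above; it is not the fix that was ultimately chosen (see the discussion below):

```
# Sketch only: restore slave mount propagation for docker-latest with a
# systemd drop-in, leaving the packaged unit file untouched.
mkdir -p /etc/systemd/system/docker-latest.service.d
cat > /etc/systemd/system/docker-latest.service.d/mountflags.conf <<'EOF'
[Service]
MountFlags=slave
EOF
systemctl daemon-reload
systemctl restart docker-latest
```

A drop-in also survives package updates, which is why it is generally preferred over editing /usr/lib/systemd/system/docker-latest.service in place.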
---

@runcom, on IRC we had a conversation, and it looks like OpenShift is relying on the old behavior of mounting everything "slave" by default, while we backported my volume mount propagation patch into 1.10 and came in line with the upstream default of everything being "private".

This looks like a blocker for the OpenShift team. Is it possible to change the default behavior of 1.10? Dan Walsh, do you have any concerns? I am hoping that by OpenShift 3.3 this dependency will have been resolved and we won't have to carry that patch in future versions of docker.

---

An alternative fix is to modify the way we bind mount /var/lib/origin/openshift.local.volumes from the host to the ose node container. If we append :slave or :shared, that also fixes this bug.

---

Yes, I prefer to move to :slave rather than carry a patch.

---

(In reply to Andy Goldstein from comment #10)
> An alternative fix is to modify the way we bind mount
> /var/lib/origin/openshift.local.volumes from the host to the ose node
> container. If we append :slave or :shared, that also fixes this bug.

Andy, either way works for me; I could add MountFlags=slave to the docker-latest unit file. Let me know which way to proceed.

---

I think :slave|:shared is better, so we don't move that far from upstream.

---

I don't think we should add MountFlags=slave, but we could patch docker to default to slave mounting, as docker-1.9 does. docker-1.10 currently defaults to private mounting, which is the upstream default.

---

If OpenShift can specify the :slave suffix in its volume mounts, that would be best, as we don't have to move away from upstream. It is also the safer default, since unintentional mounts on the host will not leak into the container. Otherwise we will have to carry a patch in docker to apply slave propagation to all mounts.

---

I don't think Andy is right, as we already shipped and can't get in our way-back machine to add it. Nor do I believe that the :slave suffix is valid on 1.9, right? So how would we know if we can/should use it? (Same problem we are suffering with libseccomp.)

Remember, the hope here is to update docker underneath OpenShift, not to force OpenShift to have to update. I get why the team doesn't like it, but "this worked yesterday and now it doesn't" is, I think, the definition of a regression. We'll work in 3.3 to follow the now-known deprecation path you have in mind.

---

I know we had some IRC chats about this, but I at least want to record this here for posterity :-)

To fix this specific bug, where the downward API files aren't visible in pods using docker 1.10 and OSE 3.2, we can make a change to the openshift-ansible playbook to modify the systemd unit file that runs the containerized atomic-openshift-node. We can append :slave to the bind mount for the volume directory and things will work; see the sketch below.

As soon as pmorie is online today, I will have him weigh in on this bz as well.
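A minimal sketch of that bind-mount change, modeled on the docker run invocation from the verification comment further down; the image tag and container name are taken from there, and the :slave suffix on the volume directory is the only difference:

```
# Sketch only: run the containerized node with explicit slave propagation
# on the volume directory, so mounts created later on the host (such as
# the tmpfs backing a downward API volume) propagate into the container.
docker run -d --name "osetest" --privileged --pid=host --net=host \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:slave \
  openshift3/ose:v3.2.0.44 start
```

With slave propagation, host-side mounts flow into the container, but mounts made inside the container do not leak back to the host, which is why :slave is generally safer than :shared here.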
---

Eric, I thought that docker 1.10 would be used only for the upcoming version of OpenShift (3.2), not the already-shipped version. Is that not the case? If it is, then in theory we still have the opportunity to modify OpenShift; whether that change is too large or too risky to make at this stage is a separate question.

---

3.2 is already shipped. The next chance to update will be 3.3.

---

Antonio, does the current docker build have the fixed version of rprivate?

---

It does, assuming the current docker build was built from the last commit of the rhel7-1.10.3 branch of projectatomic/docker.

---

(In reply to Antonio Murdaca from comment #25)
> It does assuming the current docker build has been built from the last
> commit of rhel7-1.10.3 branch of projectatomic/docker

The current docker package uses commit 47792252c76d4f10ae06795e91a982874ee02e8d.

---

(In reply to Lokesh Mandvekar from comment #29)
> the current docker package uses the commit
> 47792252c76d4f10ae06795e91a982874ee02e8d

Which is the latest on rhel7-1.10.3.

---

Lokesh, are we building with golang 1.6? And have we removed the docker-forwarder code?

---

Do you mean forward-journald? It's still being used in both docker and docker-latest. We're still on golang 1.4.2 on RHEL 7.

---

The rslave change is in docker 1.10.3 built from rhel7-1.10.3 commit 47792252c76d4f10ae06795e91a982874ee02e8d. It's included in the -28 docker build in RHEL. Andy, can you test it out?

---

Antonio, I am going to test it and will let you know.

---

Hi Antonio, it is working. I think it's also OK to test with -31:

```
# rpm -qa | grep docker
python-docker-py-1.7.2-1.el7.noarch
docker-forward-journald-1.10.3-31.el7.x86_64
python-dockerfile-parse-0.0.5-1.el7eng.noarch
docker-selinux-1.10.3-31.el7.x86_64
docker-rhel-push-plugin-1.10.3-31.el7.x86_64
docker-common-1.10.3-31.el7.x86_64
docker-1.10.3-31.el7.x86_64
docker-v1.10-migrator-1.10.3-31.el7.x86_64
```

I started OpenShift as follows:

```
docker run -d --name "osetest" --privileged --pid=host --net=host \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
  openshift3/ose:v3.2.0.44 start
```

Then I created the pod as above, and I am able to see /etc/labels and /etc/annotations correctly.

---

Great, thanks for checking.

---

Per comment #35, moving to VERIFIED.

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1274
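As a closing aside, the propagation mode that actually applies to a bind mount can be confirmed from the optional fields in /proc/self/mountinfo; a minimal sketch, reusing the osetest container name from the verification above:

```
# Sketch: "shared:N" in the optional fields means shared propagation,
# "master:N" means slave; if neither appears, the mount is private
# (the docker 1.10 upstream default that triggered this bug).
docker exec osetest grep openshift.local.volumes /proc/self/mountinfo
```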