Description of problem:
Running `docker cp containername:/path/to/file /path/to/local/file` results in /sys, /proc, /dev and several other critical mount points being lost from the host operating system. Once this happens, only a hard reboot of the system is possible (since access to systemd for a clean reboot is unavailable). The mount points are restored at boot, but until the machine is rebooted it is essentially unable to function.

Version-Release number of selected component (if applicable):
docker-1.10.3-4.gitf8a9a2a.fc24.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Install docker
2. `systemctl start docker.service`
3. From a terminal:
   sudo docker run --name c7 -it --privileged --net=host --pid=host \
     -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro \
     -v /var/lib/docker:/var/lib/docker:rw centos:centos7 bash
4. From a separate terminal:
   sudo docker cp c7:/etc/centos-release /tmp/centos-release
5. Once `docker cp` returns: `ls /proc`

Actual results:
/proc, /sys and /dev are all empty and most actions on the host are now impossible.

Expected results:
The host operating system should not be affected.

Additional info:
Upstream bug: https://github.com/docker/docker/issues/20670

The upstream bug suggests the workaround of adding `MountFlags=slave` to the [Service] section of the docker.service file. This prevents docker containers from propagating mount/unmount activity up to the container host. I don't have sufficient knowledge to comment on what performance and/or functionality effects this might have.
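For reference, the `MountFlags=slave` workaround described in the upstream bug can be applied as a systemd drop-in rather than by editing docker.service directly (the drop-in file path below is my suggestion, not from the report):

```
# /etc/systemd/system/docker.service.d/mount-flags.conf
# Run the docker daemon in a slave mount namespace so that mount and
# unmount events inside containers do not propagate back to the host.
[Service]
MountFlags=slave
```

After creating the file, run `systemctl daemon-reload` and `systemctl restart docker.service` for it to take effect.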
Vivek, any ideas what is going on here?
There are lots of calls to container.UnmountVolumes in https://github.com/docker/docker/blob/master/daemon/archive.go... I'd start looking there perhaps?
Note, this is blocking hack/test-end-to-end-docker.sh in OpenShift.
Well, you can add the slave mount flag to docker.service, but this will block being able to do `docker run -v /source:/target:shared fedora /bin/sh`.
Right, but that presumably won't work for us. I'm guessing a better option is to stop docker from unmounting bind mounts?
The question is how unmounting in the container rootfs leads to unmounting of /proc and /sys on the host. I don't understand that yet.
Is it because, e.g., /sys is bind mounted from the host into the container? container.UnmountVolumes unmounts all volumes in the container's list of mounts, which includes bind mounts.
Sure, but that should just remove the bind mount and not the source of the bind mount.
For example, if you bind mount /sys on a directory and then unmount it, only that particular mount point is unmounted, not the source of the bind mount:

# mkdir /root/sys-dest
# mount --bind /sys /root/sys-dest
# umount /root/sys-dest

Now /sys is still around and has not been unmounted. So something else is going on here. Trying to debug it.
I see that UnmountVolume() does a lazy unmount, which probably means that all of the submounts get unmounted as well. If those submounts are shared with the host, they will be unmounted on the host too. So it looks like when `docker cp` mounts the volume, we need to always mount it "private" or "rprivate", so that when these mount points are unmounted after copying the files out of the container, they don't destroy the host's mount points. I will track down where the actual mounting takes place.
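The shared-vs-private distinction above is visible in /proc/self/mountinfo, where the optional fields record each mount's propagation type (see proc(5)). A small illustrative sketch (the helper name is mine, not docker's):

```python
def propagation(mountinfo_line):
    """Classify a /proc/self/mountinfo line by its propagation type.

    Per proc(5), the optional fields (after the mount options, before
    the " - " separator) carry "shared:N", "master:N", or "unbindable"
    tags; a line with none of them is a private mount.
    """
    fields = mountinfo_line.split(" - ", 1)[0].split()
    optional = fields[6:]  # everything after the per-mount options
    tags = {f.split(":", 1)[0] for f in optional}
    if "shared" in tags:
        return "shared"      # unmount events propagate to peer mounts
    if "master" in tags:
        return "slave"       # receives events but does not send them
    if "unbindable" in tags:
        return "unbindable"
    return "private"         # events do not propagate at all


if __name__ == "__main__":
    # A /sys mount shared with the host: a lazy recursive unmount of a
    # bind mount of it would propagate to the whole peer group.
    line = "25 0 0:22 / /sys rw,nosuid shared:7 - sysfs sysfs rw"
    print(propagation(line))  # shared
```

A mount tagged `shared:N` belongs to a peer group, so lazily unmounting a recursive bind mount of it propagates the unmounts to every peer, which is how the host's /proc and /sys disappear; mounting the copy "rprivate" takes it out of the peer group first.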
Proposed a fix. https://github.com/docker/docker/pull/22009
Fixes were merged upstream and backported in Fedora. This should be fixed now.
Lokesh, we need a new version of docker with these fixes.
The docker(s) I built yesterday contains the fix for this.
What version? Mark this bug as MODIFIED with that version.
This is fixed in docker-1.10.3-6.git964eda6.fc24. Could you please test it out? Thanks.
This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.