Bug 1467824
| Field | Value |
|---|---|
| Summary | master api/controllers service in containerized install cannot be restarted with docker-1.12.6-40 |
| Product | OpenShift Container Platform |
| Reporter | Johnny Liu <jialiu> |
| Component | Installer |
| Assignee | Scott Dodson <sdodson> |
| Status | CLOSED CURRENTRELEASE |
| QA Contact | Johnny Liu <jialiu> |
| Severity | high |
| Priority | high |
| Docs Contact | |
| Version | 3.6.0 |
| CC | abutcher, akostadi, amurdaca, aos-bugs, dwalsh, imcleod, jialiu, jligon, jokerman, mifiedle, mmccomas, sdodson, sjenning, slopezpa, vgoyal |
| Target Milestone | --- |
| Target Release | 3.6.z |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| Clones | 1470261 1470389 (view as bug list) |
| Environment | |
| Last Closed | 2017-08-14 15:41:13 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1468244 |
| Bug Blocks | 1470389 |
Description
Johnny Liu 2017-07-05 08:56:12 UTC
So a container removal failed because the device is busy in some other mount namespace, and the system then tried to create another container instance with the same name, which failed because the container name already exists. So openshift first needs to make sure the previous container actually got deleted; and if deletion failed because the device is busy, figure out where the mount point is leaking and how.

You can try running the following script to figure out where the container's mount point is still mounted:

    ./find-busy-mnt.sh 209fa04d9a38f1

You can find the script here: https://github.com/rhvgoyal/misc/blob/master/find-busy-mnt.sh

BTW, I think this problem has been around for a while. In the past, a forced container removal would remove the container anyway, even if the graph driver failed to remove it, and that resulted in leaked storage. Upstream has since changed the behavior: container removal now fails if the graph driver fails to remove the container. That means a container name cannot be reused after a failed removal, which is what made this problem visible.
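For readers without the link handy, the gist of the linked script is to scan every process's /proc/<pid>/mountinfo for the container's devicemapper device, revealing which mount namespaces still hold the container's rootfs. Below is a minimal sketch of that idea, not the real script; it assumes the devicemapper graph driver and that `docker inspect` exposes the backing device name (which it does for devicemapper on this docker version):

```bash
#!/bin/bash
# Sketch of a "find busy mount" helper; the real script lives at
# https://github.com/rhvgoyal/misc/blob/master/find-busy-mnt.sh
cid=$1

# Device backing the container rootfs, e.g. docker-253:0-22057-d0301573d9c4...
dev=$(docker inspect -f '{{.GraphDriver.Data.DeviceName}}' "$cid") || exit 1

printf '%-8s %-16s %s\n' PID NAME MNTNS
for pid in /proc/[0-9]*; do
    # A process whose mountinfo references the device is pinning the
    # container rootfs mount in its own mount namespace.
    if grep -q "$dev" "$pid/mountinfo" 2>/dev/null; then
        printf '%-8s %-16s %s\n' "${pid#/proc/}" \
            "$(cat "$pid/comm" 2>/dev/null)" \
            "$(readlink "$pid/ns/mnt" 2>/dev/null)"
    fi
done
```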
QE, can you provide me access to a system which is in this state? I want to look around a bit.

As usual, I need an engineer from the openshift team to break it down for me and tell me what these two services do, how container creation happens, and possibly what options they are run with. That might help determine how the mount points are leaking.

    atomic-openshift-master-api
    atomic-openshift-master-controllers

Scott?

This issue has most likely come to the surface due to the following commit, which was recently backported from upstream. I think this is the right thing to do: if there is an error in container removal, it should be returned to the caller instead of masking the error and leaving all sorts of bad state/resources behind in docker, which will create problems in the future.

    commit afdec061eb47e6bd602654cc1e996e674949a9c3
    Author: Sergio Lopez <slp>
    Date:   Thu Jun 22 15:25:40 2017 +0200

        BACKPORT: Do not remove containers from memory on error

        Upstream: https://github.com/moby/moby/commit/54dcbab25ea4771da303fa95e0c26f2d39487b49
        Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1463534
                  https://bugzilla.redhat.com/show_bug.cgi?id=1460728

Vivek,

https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-api.service.j2
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-controllers.service.j2

are the service definitions which lack the container cleanup you've noted. I'm left wondering how this ever worked.

-- Scott

Scott,

They seem to be using "force" removal of the container, which was supposed to be a developer option for debugging only, not a production thing. Using it in production means leaving behind all sorts of resources which will never be freed. Enough people misused this option and then complained about the thin pool being full that docker has now removed the option of being able to force-remove a container. A container will now be removed only if it can be removed cleanly; otherwise the user will have to debug why the container can't be removed.

Yes, sorry, I missed the ExecStartPre. So what's the suggested remedy? We're removing it via force to handle scenarios where it may not have been shut down cleanly. So we need to remove it; how do we make sure that's more successful?

For example, ExecStartPre is doing a force removal of the container (notice the -f), trying to clean it up:

    ExecStartPre=-/usr/bin/docker rm -f {{ openshift.common.service_type}}-master-api

And then it launches the container with the "--rm" option, which uses "-f" internally:

    ExecStart=/usr/bin/docker run --rm --privileged --net=host .......

This worked in the past (and left a mess behind in docker, plus unclaimable space in the thin pool), but it will not work going forward. So the real issue here is to figure out why container deletion failed and who is keeping the device busy, and fix that. In the past we ignored it and moved on. We can't ignore it any more.
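Condensed into shell form, the pattern the two unit templates use looks roughly like this (names and flags taken from the lines quoted above; the image tag is illustrative, and the trailing arguments are elided as in the template):

```bash
# What the unit effectively runs on every (re)start. The leading "-" on
# ExecStartPre means systemd ignores its failure:
docker rm -f atomic-openshift-master-api || true

# ExecStart: "--rm" makes the daemon force-remove the container on exit.
docker run --rm --privileged --net=host \
    --name atomic-openshift-master-api openshift3/ose:v3.6.134 ...

# Failure mode: if the forced rm cannot delete the rootfs (device busy in
# another mount namespace), the Dead container keeps its name, and the
# docker run fails with "Conflict. The name ... is already in use".
```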
(In reply to Scott Dodson from comment #9)
> Yes, sorry, I missed the ExecStartPre. So what's the suggested remedy? we're
> removing it via force to handle scenarios where it may not have been shut
> down cleanly. So we need to remove it, how do we make sure that's more
> successful?

The error message suggests that something is keeping the container's device/mount point busy. We need to figure out who is keeping it busy and why. I suggested my script find-busy-mnt.sh as a starting point. If I can get access to the system which is experiencing this issue, I would like to have a look. Can somebody please also provide "docker info" output.

Reproduced with a controllers restart.

    Jul 05 15:48:51 master1.abutcher.com systemd[1]: Starting Atomic OpenShift Master Controllers...
    Jul 05 15:48:52 master1.abutcher.com atomic-openshift-master-controllers[4412]: Error response from daemon: Driver devicemapper failed to remove root filesystem d0301573d9c467191de9a9927fcb1b3b7be911726c49ab535a77b1d6e076b277: remove /var/lib/docker/devicemapper/mnt/25c1debb83af5d9
    Jul 05 15:48:52 master1.abutcher.com atomic-openshift-master-controllers[4420]: /usr/bin/docker-current: Error response from daemon: Conflict. The name "/atomic-openshift-master-controllers" is already in use by container d0301573d9c467191de9a9927fcb1b3b7be911726c49ab535a77b1d6e076
    Jul 05 15:48:52 master1.abutcher.com atomic-openshift-master-controllers[4420]: See '/usr/bin/docker-current run --help'.
    Jul 05 15:48:52 master1.abutcher.com systemd[1]: atomic-openshift-master-controllers.service: main process exited, code=exited, status=125/n/a

    [root@master1 ~]# docker ps -a
    CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS   PORTS   NAMES
    d0301573d9c4   openshift3/ose:v3.6.134   "/usr/bin/openshift s"   14 minutes ago   Dead             atomic-openshift-master-controllers

    [root@master1 ~]# docker info
    Containers: 9
     Running: 4
     Paused: 0
     Stopped: 5
    Images: 6
    Server Version: 1.12.6
    Storage Driver: devicemapper
     Pool Name: docker-253:0-22057-pool
     Pool Blocksize: 65.54 kB
     Base Device Size: 10.74 GB
     Backing Filesystem: xfs
     Data file: /dev/loop0
     Metadata file: /dev/loop1
     Data Space Used: 4.767 GB
     Data Space Total: 107.4 GB
     Data Space Available: 4.53 GB
     Metadata Space Used: 5.267 MB
     Metadata Space Total: 2.147 GB
     Metadata Space Available: 2.142 GB
     Thin Pool Minimum Free Space: 10.74 GB
     Udev Sync Supported: true
     Deferred Removal Enabled: true
     Deferred Deletion Enabled: true
     Deferred Deleted Device Count: 1
     Data loop file: /var/lib/docker/devicemapper/devicemapper/data
     WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
     Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
     Library Version: 1.02.135-RHEL7 (2016-11-16)
    Logging Driver: journald
    Cgroup Driver: systemd
    Plugins:
     Volume: local
     Network: null bridge overlay host
     Authorization: rhel-push-plugin
    Swarm: inactive
    Runtimes: docker-runc runc
    Default Runtime: docker-runc
    Security Options: seccomp selinux
    Kernel Version: 3.10.0-514.16.1.el7.x86_64
    Operating System: Employee SKU
    OSType: linux
    Architecture: x86_64
    Number of Docker Hooks: 2
    CPUs: 1
    Total Memory: 1.796 GiB
    Name: master1.abutcher.com
    ID: QY6W:IQ7I:YQGM:DQMG:F2D6:YWQW:ZGRD:AFQZ:PPGL:N7YO:XMF3:CZ4X
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    No Proxy: .cluster.local,.svc,master1.abutcher.com
    Registry: https://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/v1/
    WARNING: bridge-nf-call-iptables is disabled
    WARNING: bridge-nf-call-ip6tables is disabled
    Insecure Registries:
     brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
     127.0.0.0/8
    Registries: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888 (insecure), registry.access.redhat.com (secure), registry.access.redhat.com (secure), docker.io (secure)

    [root@master1 ~]# ./find-busy-mnt.sh d0301573d9c4
    PID    NAME        MNTNS
    2222   openshift   mnt:[4026532316]
    2222   openshift   mnt:[4026532316]
    2222   openshift   mnt:[4026532316]
    2275   journalctl  mnt:[4026532316]
    2275   journalctl  mnt:[4026532316]
    2275   journalctl  mnt:[4026532316]

    2222: /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=2
    2275: journalctl -k -f

Andrew, can you give me access to the system where you see the issue? I want to look at a lot of things. A few more follow-up questions:

- Assuming this is a rhel7.4 kernel?
- Is /proc/sys/fs/may_detach_mounts set to 1 or 0?
- Is the docker daemon running in the host mount namespace or in a slave mount namespace (MountFlags=slave in docker.service)?
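Quick ways to answer those three questions on an affected node (standard commands, nothing specific to this bug):

```bash
uname -r                              # 514.x => RHEL 7.3, 693.x => RHEL 7.4
cat /proc/sys/fs/may_detach_mounts    # typically absent on 7.3 kernels, 0/1 on 7.4
systemctl show -p MountFlags docker   # "MountFlags=slave" => daemon in a slave mount ns
```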
I am assuming that these two processes, "openshift" and "journalctl", are part of atomic-openshift-master-controllers? Can somebody confirm that? If yes, the question is why these two processes are still running. I mean, why did stopping the container not stop these processes, so that they release the container rootfs and the container can be removed? Can somebody try to stop the master controllers service and see if it actually stops the containers and their processes.

Stopping the controllers and then starting them succeeds with no issue, but restarting them directly hits the issue:

    [root@openshift-137 ~]# docker ps -a | grep controllers
    b21fd5639f19   openshift3/ose:v3.6.133   "/usr/bin/openshift s"   19 seconds ago   Up 17 seconds   atomic-openshift-master-controllers
    [root@openshift-137 ~]# sh find-busy-mnt.sh b21fd5639f19
    PID     NAME              MNTNS
    24542   dockerd-current   mnt:[4026532134]
    24549   docker-containe   mnt:[4026532134]
    24743   docker-containe   mnt:[4026532134]
    24842   docker-containe   mnt:[4026532134]
    24877   docker-containe   mnt:[4026532134]
    25083   docker-containe   mnt:[4026532134]
    25116   docker-containe   mnt:[4026532134]
    [root@openshift-137 ~]# service atomic-openshift-master-controllers stop
    Redirecting to /bin/systemctl stop atomic-openshift-master-controllers.service
    [root@openshift-137 ~]# sh find-busy-mnt.sh b21fd5639f19
    No pids found
    [root@openshift-137 ~]# docker ps -a | grep controllers
    [root@openshift-137 ~]# service atomic-openshift-master-controllers start
    Redirecting to /bin/systemctl start atomic-openshift-master-controllers.service
    [root@openshift-137 ~]# docker ps -a | grep controllers
    04962992cc5a   openshift3/ose:v3.6.133   "/usr/bin/openshift s"   16 seconds ago   Up 15 seconds   atomic-openshift-master-controllers
    [root@openshift-137 ~]# sh find-busy-mnt.sh 04962992cc5a
    PID     NAME              MNTNS
    24542   dockerd-current   mnt:[4026532134]
    24549   docker-containe   mnt:[4026532134]
    24743   docker-containe   mnt:[4026532134]
    24842   docker-containe   mnt:[4026532134]
    24877   docker-containe   mnt:[4026532134]
    25083   docker-containe   mnt:[4026532134]
    25774   docker-containe   mnt:[4026532134]

I think this issue is related to mount points leaking due to usage of the "-v /:/rootfs" option. For example, atomic-openshift-node uses this, and it will see other containers' rootfs mounts. oci-umount should fix it, but it is not being installed by default; I have opened a bug to install oci-umount by default: https://bugzilla.redhat.com/show_bug.cgi?id=1468244

Even after that, somehow oci-umount is not working on this system. I see the following error message:

    Jul 06 09:51:58 openshift-137.lab.sjc.redhat.com oci-umount[9745]: umounthook <info>: Could not find mapping for mount [/var/lib/docker/devicemapper] from host to conatiner. Skipping.

It can't figure out that /var/lib/docker/devicemapper on the host is mounted at <container-root>/rootfs/var/lib/docker inside the container. Need to debug why that's the case.
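For anyone checking whether the hook is even in play on their node, a few hedged sanity checks (the hooks.d path is the one mentioned later in this bug; exact packaging may differ per docker build):

```bash
ls -l /usr/libexec/oci/hooks.d/
rpm -qf /usr/libexec/oci/hooks.d/oci-umount    # which package ships the hook
# The hook logs to the journal; look for messages like the one quoted above:
journalctl -b --no-pager | grep oci-umount | tail -20
```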
I built the latest oci-umount from upstream and that seems to work, in the sense that it is able to figure out that /var/lib/docker/devicemapper maps to /rootfs/var/lib/docker/devicemapper inside the container. So we will need to rebuild the docker package with the latest oci-umount from upstream; I pinged lokesh about it already.

Still, it does not work for the atomic-openshift-node container, and I think the reason is that the additional volume mount (-v /var/lib/docker:/var/lib/docker) seems to keep the mount point busy. IOW, if I do

    docker run -ti -v /:/rootfs fedora bash

oci-umount is working. But if I do

    docker run -ti -v /:/rootfs -v /var/lib/docker:/var/lib/docker fedora bash

it is not working. Maybe it is getting confused because /var/lib/docker/devicemapper is now visible at two places inside the container, namely /rootfs/var/lib/docker/devicemapper and /var/lib/docker/devicemapper. Will look into it.

BTW, why do we need to volume mount /var/lib/docker inside the container? CC eparis.

I'm not certain, perhaps to monitor storage usage? It's been in there for the past 1.5 years though. We could test without it.

The node needs to mount /var/lib/docker in order to calculate container storage usage.

Scott, can you give some more details? What files is the node looking at? oci-umount will unmount /var/lib/docker/devicemapper and /var/lib/docker/containers; will that break it? The rootfs of other containers is leaking into the node container, and that is what fails removal of those containers. I think whatever data you need, you will have to get through the docker API (docker info or docker inspect); you can't expect /var/lib/docker/devicemapper or /var/lib/docker/containers to be mounted inside your container. Did we try asking the docker API for the required data, instead of poking at docker's internal metadata directly?

Vivek,

Kubernetes uses cadvisor to gather disk usage stats on a root/image filesystem basis and the writable layer on a per-container basis. It does not use the docker API to figure out the size of the writable layer. Does the docker API provide that information?

(In reply to Seth Jennings from comment #23)
> Vivek,
>
> Kubernetes uses cadvisor to gather disk usage stats on a root/image
> filesystem basis and the writable layer on a per-container basis. It does
> not use the docker API to figure out the size of the writable layer. Does
> the docker API provide that information?

Seth, I believe "docker ps -s" gives the layer size, but there might not be a way to request it for one specific container, which is probably why it is very slow. So what size are you trying to look at? Layer size (the changes made by the container)? I believe that's what "-s" provides. And how do you determine that just by looking at container metadata? You are probably running some tool (df) on the container mountpoint? That itself is racy w.r.t. container removal.

Anyway, another (less preferred) option is to rely on new kernel functionality to forcibly remove mounts from other mount namespaces when the mountpoint directory is removed. This is how it will work:

- It will need a 7.4 kernel.
- It will require /proc/sys/fs/may_detach_mounts to be 1.
- It will require deferred device removal and deferred device deletion to be turned on.

Now when a container is removed, its device will be deferred-deleted (despite being busy), and then we will unmount the container rootfs on the host and remove that directory, which removes the leaked mount points as well (except for the case where the container process is inside the mount point being removed). This will not work on a 7.3 kernel, though. So using oci-umount is more generic and will work on both 7.3 and 7.4 kernels, as long as we can figure out how to not poke at docker metadata directly.
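Spelled out as commands, under the assumptions just listed (7.4 kernel; the dm.* flags are docker's standard devicemapper storage options, set wherever the host configures daemon storage options, e.g. /etc/sysconfig/docker-storage):

```bash
# 1. Allow the kernel to detach mounts that are busy in other namespaces:
sysctl fs.may_detach_mounts=1    # i.e. echo 1 > /proc/sys/fs/may_detach_mounts

# 2. Make devicemapper defer removal/deletion of busy devices -- the docker
#    info output above already shows both enabled:
#      --storage-opt dm.use_deferred_removal=true
#      --storage-opt dm.use_deferred_deletion=true
#    (add to the daemon's storage options, then restart the daemon)
systemctl restart docker
```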
Ian, Seth, Derek and I had some conversations about this issue. oci-umount takes the /var/lib/docker/devicemapper, /var/lib/docker/overlay2 and /var/lib/docker/containers mount points away from the container, and there was concern that the cadvisor disk-stats feature might be broken by this. Seth and Derek said that the disk-stats feature is, as of now, supposed to work only with the overlay graph driver, and it looks like it is already broken in a containerized environment anyway, because mount points under /var/lib/docker/overlay2 don't propagate: a cadvisor container only sees the mount points that exist when it starts, not those of containers launched later. IOW, in a containerized environment the disk-stats feature is probably broken on overlay2 also. So the idea was to not volume mount /var/lib/docker inside the container (-v /var/lib/docker:/var/lib/docker), and this should most likely be fine; they could not remember anything else depending on it.

Can we test the latest docker (docker-1.12.6-41.1.gitf55a118.el7) with the /var/lib/docker volume mount removed from the /etc/systemd/system/atomic-openshift-node.service file and see if the problem is fixed? I am not sure who owns the /etc/systemd/system/atomic-openshift-node.service file; they will have to make the appropriate changes if this does fix the issue.

QE, please do the following steps:

- Install/upgrade to docker -41 (docker-1.12.6-41.1.gitf55a118.el7)
- Edit /etc/systemd/system/atomic-openshift-node.service and remove the string "-v /var/lib/docker:/var/lib/docker"
- systemctl daemon-reload
- systemctl start atomic-openshift-node
- systemctl restart atomic-openshift-master-api

If this works, then we are in good shape.
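The same steps as a script, for convenience (a sketch; the sed pattern assumes the volume flag appears verbatim, with single spaces, in the unit file):

```bash
yum update -y docker    # to docker-1.12.6-41.1.gitf55a118.el7
sed -i 's| -v /var/lib/docker:/var/lib/docker||' \
    /etc/systemd/system/atomic-openshift-node.service
systemctl daemon-reload
systemctl start atomic-openshift-node
systemctl restart atomic-openshift-master-api
```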
(In reply to Vivek Goyal from comment #28)
> QE, please do the following steps:
>
> - Install/upgrade to docker -41 (docker-1.12.6-41.1.gitf55a118.el7)
> - Edit /etc/systemd/system/atomic-openshift-node.service and remove the
>   string "-v /var/lib/docker:/var/lib/docker"
> - systemctl daemon-reload
> - systemctl start atomic-openshift-node
> - systemctl restart atomic-openshift-master-api
>
> If this works, then we are in good shape.

Just as you described in your comments above: after removing the string "-v /var/lib/docker:/var/lib/docker", the master api/controllers services restart successfully even with docker -40.

So my questions are:
1) If removing the string "-v /var/lib/docker:/var/lib/docker" is enough, why do we need to update the docker version?
2) If we remove the string "-v /var/lib/docker:/var/lib/docker", how do we resolve comment #23? That is related to openshift functionality.

And I have a new finding: if /proc/sys/fs/may_detach_mounts is set to 1, then even with docker -40 and without removing "-v /var/lib/docker:/var/lib/docker" from the node service, the master service restarts successfully.

(In reply to Johnny Liu from comment #29)
> So my questions are:
> 1) If removing the string "-v /var/lib/docker:/var/lib/docker" is enough,
>    why do we need to update the docker version?

Did you test on the same node you gave me for testing, or on a different node? I had replaced /usr/libexec/oci/hooks.d/oci-umount on that node with the upstream version, which is included in -41. In my testing, the oci-umount included with -40 was not working for some reason.

> 2) If we remove the string "-v /var/lib/docker:/var/lib/docker", how do we
>    resolve comment #23? That is related to openshift functionality.

We did talk to Seth, and cadvisor should not be affected too negatively. Read the details in comment 27.

> And I have a new finding: if /proc/sys/fs/may_detach_mounts is set to 1,
> then even with docker -40 and without removing
> "-v /var/lib/docker:/var/lib/docker" from the node service, the master
> service restarts successfully.

Right. I mentioned this option in comment 26, but it will only work with the 7.4 kernel, not with the 7.3 kernel. I am trying to find a solution which works with the 7.3 kernel as well.

OK, I have queued a PR for oci-umount to handle multiple mappings of the same source at multiple destinations inside a container, and to unmount all of them:

https://github.com/projectatomic/oci-umount/pull/10

With this PR, there is no need to remove "-v /var/lib/docker/:/var/lib/docker" from the atomic-openshift-node.service file. oci-umount will recognize that /var/lib/docker/devicemapper is mounted at two places inside the container and unmount both of them. That way, container mount points will not be kept busy inside the node container, and restarts of master-api and master-controllers should work fine.

For example, if a container is run with the following:

    docker run -ti -v /:/rootfs -v /var/lib/docker/:/var/lib/docker fedora bash

then after the above PR, oci-umount will see /var/lib/docker/devicemapper on the host mounted at two places inside the container:

    /var/lib/docker/devicemapper
    /rootfs/var/lib/docker/devicemapper

and it will unmount both, hence removing the leaking of other containers' rootfs into the system.
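The double mapping is easy to observe by hand, as a sketch (findmnt ships with util-linux and should be present in the fedora image from the example; the printed output here is illustrative):

```bash
docker run -ti -v /:/rootfs -v /var/lib/docker/:/var/lib/docker fedora bash

# Inside the container, the same host directory is visible twice:
findmnt -rn -o TARGET,SOURCE | grep devicemapper
#   /var/lib/docker/devicemapper        /dev/...
#   /rootfs/var/lib/docker/devicemapper /dev/...
# With the PR above, oci-umount unmounts both targets, so this container no
# longer pins other containers' rootfs mounts.
```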
(In reply to Vivek Goyal from comment #30)
> Did you test on the same node you gave me for testing, or on a different
> node? I had replaced /usr/libexec/oci/hooks.d/oci-umount on that node with
> the upstream version, which is included in -41. In my testing, the
> oci-umount included with -40 was not working for some reason.

Good to know. Today I tried the same steps on a new install: -40 without removing the string "-v /var/lib/docker:/var/lib/docker" still hits the issue. After updating docker to -41, still without removing the string, the issue disappeared. I also tried -40 + not removing the string + echo 1 > /proc/sys/fs/may_detach_mounts, and did not hit the issue.

So my understanding of the final resolution is:
- For the RHEL 7.4 kernel: do not remove the string "-v /var/lib/docker:/var/lib/docker", and use -41 docker.
- For RHEL 7.3: the plan is the same (do not remove the string, use -41 docker), but we are still trying to confirm a resolution.

Am I right?

I think we should upgrade to -41 and remove "-v /var/lib/docker:/var/lib/docker"; that should work on both 7.3 and 7.4 kernels.

Seth, is it safe to remove the /var/lib/docker mount in all versions back to 3.4?

Scott, from what I'm hearing on the list, it doesn't seem like it would be an issue. I'm as sure it is safe as I'm sure it is already a problem, i.e. the orphaned thin devices eating the storage pool.

*** Bug 1470389 has been marked as a duplicate of this bug. ***

https://github.com/openshift/openshift-ansible/pull/4748 removes the /var/lib/docker mount and installs oci-umount and runc.

Today I tried the following scenarios on RHEL 7.3 (kernel-3.10.0-514.25.2.el7.x86_64):

1) docker -40 + no oci-umount/runc installed + not removing "/var/lib/docker": FAIL
2) docker -40 + oci-umount/runc installed + not removing "/var/lib/docker": FAIL
3) docker -40 + oci-umount/runc installed + removing "/var/lib/docker": PASS
4) docker -45 + no runc installed + not removing "/var/lib/docker": PASS

So according to my test results, docker -45 makes both RHEL 7.3 and RHEL 7.4 work without removing "/var/lib/docker". Could anyone help confirm this? If yes, I think we do not need to remove "/var/lib/docker" from the node; the minimal code change reduces the chance of introducing a new regression bug.

The CI jobs have been failing on my PR to remove /var/lib/docker because the node expects to be able to write to the /var/lib/docker/network path.

Given that docker-1.12.6-40 was an interim build and QE says docker-1.12.6-45 works without removing /var/lib/docker, I'd like to defer the change to remove /var/lib/docker from the node.

Marking ON_QA for QE to test with docker-1.12.6-47.git0fdc778.el7, which is the latest build attached to the 7.4 errata.

Vivek, do you think it's critical that we remove /var/lib/docker right now?

Re-tested docker-1.12.6-47.git0fdc778.el7.x86_64 on both RHEL 7.4 (kernel-3.10.0-693.el7.x86_64) and RHEL 7.3 (kernel-3.10.0-514.26.1.el7.x86_64), without removing /var/lib/docker from the node; both are working well. Moving back to ASSIGNED, leaving it there for the final decision.

(In reply to Scott Dodson from comment #43)
> The CI jobs have been failing on my PR to remove /var/lib/docker because the
> node expects to be able to write to the /var/lib/docker/network path.
>
> Given that docker-1.12.6-40 was an interim build and QE says
> docker-1.12.6-45 works without removing /var/lib/docker, I'd like to defer
> the change to remove /var/lib/docker from the node.
>
> Marking ON_QA for QE to test with docker-1.12.6-47.git0fdc778.el7, which is
> the latest build attached to the 7.4 errata.
>
> Vivek, do you think it's critical that we remove /var/lib/docker right now?

Scott,

oci-umount now has the capability to remove multiple mounts inside a container, so removing the /var/lib/docker volume mount is not strictly necessary (it would be nice, though).

We found a bug in oci-umount that made it crash in certain conditions. It has now been fixed in build docker-2:1.12.6-48.git0fdc778, so please use docker -48 for all future testing and deployments.

Moving this to 3.6.1, clearing the regression and testblocker flags. I'll try to coordinate with networking and the other teams on whether or not we can remove /var/lib/docker from the mounts.

We cannot remove the /var/lib/docker volume today, and the problem no longer exists in docker-1.12.6-48, so closing this.

This issue was reported against the docker -40 version, so it should not be closed as NOTABUG; changing to CURRENTRELEASE.
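For anyone re-verifying the fixed builds (-47/-48), a quick hedged loop along the lines of the QE tests above: restart the master services a few times and grep the journal for the failure signature quoted earlier in this bug. Service and unit names are the ones used throughout this report.

```bash
for i in 1 2 3 4 5; do
    systemctl restart atomic-openshift-master-api \
                      atomic-openshift-master-controllers || break
    sleep 10
done
journalctl -u atomic-openshift-master-controllers --since "-15min" \
    | grep -E 'already in use|failed to remove root filesystem' \
    && echo "failure signature still present" \
    || echo "no failure signature found"
```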