Bug 1460728

Summary: Container deletions with forceRemove==true may leak hard to clean up DM devices
Product: Red Hat Enterprise Linux 7
Reporter: Sergio Lopez <slopezpa>
Component: docker
Assignee: Vivek Goyal <vgoyal>
Status: CLOSED DUPLICATE
QA Contact: atomic-bugs <atomic-bugs>
Severity: high
Priority: unspecified
Version: 7.3
CC: agk, amurdaca, dornelas, lsm5, rhel, rrajaram, santiago, vgoyal
Target Milestone: rc
Target Release: ---
Keywords: Extras
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2017-06-27 14:10:36 UTC
Bug Blocks: 1186913
Attachments: Simple python script to keep a device open

Description Sergio Lopez 2017-06-12 14:10:09 UTC
Description of problem:

On a Docker deployment with the devicemapper graphdriver, if a container deletion is issued with forceRemove==true (as "docker rm --force" does, or as happens automatically for containers started with "--rm") and the device is busy at the moment the deletion is processed, an internal DM snapshot device is leaked.

Those devices aren't shown by the usual DM tools, so they are hard to manipulate.

Their presence is usually detected through higher-than-expected usage of the DM pool, and through there being more "*-init" files than existing containers in $DOCKER_ROOT/devicemapper/metadata.
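
A quick way to check for the leak is to compare both counts. A minimal sketch in Python; the default /var/lib/docker graph root is an assumption:

# count-leak.py (hypothetical helper): leaked devices show up as more
# "*-init" metadata files than containers known to docker
import glob
import subprocess

inits = glob.glob('/var/lib/docker/devicemapper/metadata/*-init')
containers = subprocess.check_output(
    ['docker', 'ps', '-aq', '--no-trunc']).split()
print('%d -init files vs %d containers' % (len(inits), len(containers)))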

Version-Release number of selected component (if applicable):

Tested with docker-1.12.6-28, but upstream is most likely affected as well.

How reproducible:

Always.

Steps to Reproduce:
1. Create a container with auto-removal option ("docker run -it --rm busybox /bin/sh")
2. Find its associated DM device in the "dmsetup table" output, and keep it open (see the attached keepopen.py script; a device-lookup sketch follows these steps)
3. Exit from container.
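
A hedged sketch of the lookup in step 2. The GraphDriver fields queried below are what "docker inspect" exposes for the devicemapper graphdriver, but treat the exact field names as an assumption:

# find-device.py (hypothetical helper): resolve a container's DM device node
import subprocess
import sys

cid = sys.argv[1]  # container ID, e.g. from "docker ps -q"
name = subprocess.check_output(
    ['docker', 'inspect', '-f', '{{.GraphDriver.Data.DeviceName}}', cid]
).decode().strip()
print('/dev/mapper/' + name)  # path to hand to keepopen.py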

Actual results:

The docker client will print this on exit:

Error response from daemon: Driver devicemapper failed to remove root filesystem eac652973003c37a06fa0984fb0e33224494f90c7ee7c952af70304838879077: failed to remove device 0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d:Device is Busy

The container is not present in "docker ps -a" output, but its associated DM devices and their respective metadata files still exist:

# ls /var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d*
/var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d
/var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d-init

# ./docker-dmdump 75 86
INFO[0000] devID 75: meta=0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d-init layer=none size=10737417728 
INFO[0000] devID 76: meta=0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d layer=none size=10737417728 

Expected results:

For me, either of these results would be acceptable:

 1. Container deletion fails, so it can be retried later. This would break the "forceRemove" semantics, though.

 2. DM devices and the metadata files are deleted, or at least marked for deferred deletion.

Additional info:

IMHO, a solution honoring the "forceRemove" semantics would imply propagating this argument to the graphdriver itself, so it is aware that there won't be a second opportunity and can act accordingly.

As an example, the devicemapper driver could enable the deferred deletion strategy for that single operation, even if this option isn't explicitly enabled for the driver, as a way to avoid leaking DM devices.
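
For reference, deferred removal/deletion already exist as global opt-ins through the daemon's storage options; the proposal above would amount to applying the same machinery per operation whenever forceRemove is set:

# dockerd --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true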

If needed, I can work on a PoC patch implementing this behavior.

Comment 2 Sergio Lopez 2017-06-12 14:11:40 UTC
Created attachment 1287059 [details]
Simple python script to keep a device open
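
The attachment itself isn't inlined in this report; a minimal sketch of such a script, assuming the device node path is passed as the first argument:

#!/usr/bin/env python
# keepopen.py (sketch): hold a device node open so that DM removal
# of it fails with "Device is Busy"
import sys
import time

dev = open(sys.argv[1], 'rb')  # the open fd keeps the device busy
print('holding %s open; Ctrl-C to release' % sys.argv[1])
try:
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    dev.close()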

Comment 6 Sergio Lopez 2017-06-27 14:10:36 UTC
As this bug is a symptom of a more general issue, I'm closing it as a duplicate of BZ 1463534.

*** This bug has been marked as a duplicate of bug 1463534 ***