Bug 1460728 - Container deletions with forceRemove==true may leak hard to clean up DM devices
Summary: Container deletions with forceRemove==true may leak hard to clean up DM devices
Keywords:
Status: CLOSED DUPLICATE of bug 1463534
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Vivek Goyal
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1186913
 
Reported: 2017-06-12 14:10 UTC by Sergio Lopez
Modified: 2021-03-11 15:19 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-27 14:10:36 UTC
Target Upstream Version:
Embargoed:


Attachments
Simple python script to keep a device open (164 bytes, text/x-python)
2017-06-12 14:11 UTC, Sergio Lopez

Description Sergio Lopez 2017-06-12 14:10:09 UTC
Description of problem:

On a Docker deployment with the devicemapper graphdriver, if a container deletion is issued with forceRemove==true (as is done by "docker rm --force", or when running a container with the "--rm" argument) and the device is busy at the moment of processing it, an internal DM snapshot device is leaked.

Those devices aren't shown by the usual DM tools, so they are hard to manipulate.

Their presence is usually detected by higher-than-expected usage of the DM pool, and by finding more "*-init" files than existing containers in $DOCKER_ROOT/devicemapper/metadata.
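
A quick way to check for this condition is to compare the number of "*-init" metadata files against the number of containers Docker still tracks. A minimal sketch (the metadata path below is the default assumed elsewhere in this report; adjust for a non-default $DOCKER_ROOT):

#!/usr/bin/env python
# Sketch: count "*-init" metadata files and compare against the number
# of containers Docker knows about. A surplus of "-init" files suggests
# leaked DM devices.
import glob
import subprocess

METADATA_DIR = "/var/lib/docker/devicemapper/metadata"  # default $DOCKER_ROOT

init_files = glob.glob(METADATA_DIR + "/*-init")
containers = subprocess.check_output(["docker", "ps", "-aq"]).split()

print("-init metadata files: %d" % len(init_files))
print("containers (docker ps -a): %d" % len(containers))
if len(init_files) > len(containers):
    print("possible leaked DM devices: %d" % (len(init_files) - len(containers)))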

Version-Release number of selected component (if applicable):

Tested with docker-1.12.6-28, but I think even upstream is affected.

How reproducible:

Always.

Steps to Reproduce:
1. Create a container with auto-removal option ("docker run -it --rm busybox /bin/sh")
2. Find its associated DM device in "dmsetup table", and keep it open (see attached keepopen.py script; a minimal sketch follows these steps)
3. Exit from container.
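
For reference, a minimal sketch of a keep-open helper in the spirit of the attached keepopen.py (the actual attachment may differ; the device path is whatever "dmsetup table" reports for the container):

#!/usr/bin/env python
# Sketch of a keep-open helper: open the container's DM device and hold
# the file descriptor so the device stays busy while the container exits.
import sys
import time

dev = sys.argv[1]  # e.g. a /dev/mapper/docker-* path from "dmsetup table"

f = open(dev, "rb")  # holding this fd keeps the device busy
print("holding %s open; press Ctrl-C to release" % dev)
try:
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    f.close()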

Actual results:

docker process will print this on exit:

Error response from daemon: Driver devicemapper failed to remove root filesystem eac652973003c37a06fa0984fb0e33224494f90c7ee7c952af70304838879077: failed to remove device 0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d:Device is Busy

The container is not present in "docker ps -a" output, but its associated DM devices and their respective metadata files still exist:

# ls /var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d*
/var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d
/var/lib/docker/devicemapper/metadata/0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d-init

# ./docker-dmdump 75 86
INFO[0000] devID 75: meta=0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d-init layer=none size=10737417728 
INFO[0000] devID 76: meta=0ecb3b3a1e6c51ac6300de868f66d83b30ad84a16a9e5bdb0c779e4a15cceb7d layer=none size=10737417728 

Expected results:

For me, one of these results would be acceptable:

 1. Container deletion fails, so it can be retried later. This would break the "forceRemove" semantics, though.

 2. DM devices and the metadata files are deleted, or at least marked for deferred deletion.

Additional info:

IMHO, a solution honoring the "forceRemove" semantics would imply propagating this argument to the graphdriver itself, so it's aware that there won't be a second opportunity and can act accordingly.

As an example, in the case of the devicemapper driver, it could enable the deferred deletion strategy for this operation, even if this option isn't explicitly enabled for the driver, as a way to avoid leaking DM devices.
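
To illustrate the intended control flow (in Python-style pseudocode; the names and structure here are hypothetical, not Docker's actual Go code):

# Hypothetical sketch, not Docker's real API: thread forceRemove down to
# the graphdriver so a busy device is marked for deferred deletion
# instead of being leaked.
class DeviceBusyError(Exception):
    pass

class DevicemapperDriver:
    def remove(self, device_id, force=False):
        try:
            self._deactivate_and_delete(device_id)
        except DeviceBusyError:
            if not force:
                raise  # the caller can retry the deletion later
            # With forceRemove there is no second chance: fall back to
            # deferred deletion even if it isn't enabled driver-wide.
            self._mark_deferred_deletion(device_id)

    def _deactivate_and_delete(self, device_id):
        pass  # dmsetup remove + thin-pool device deletion

    def _mark_deferred_deletion(self, device_id):
        pass  # record in metadata; cleanup runs once the device closes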

If needed, I can work on a PoC patch implementing this behavior.

Comment 2 Sergio Lopez 2017-06-12 14:11:40 UTC
Created attachment 1287059
Simple python script to keep a device open

Comment 6 Sergio Lopez 2017-06-27 14:10:36 UTC
As this bug is a symptom of a more general issue, I'm closing it as a duplicate of BZ 1463534.

*** This bug has been marked as a duplicate of bug 1463534 ***

