Description of problem:
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Ugh, sorry about that, pressed enter early.
Anyway, if I remove a file on a thinly provisioned device we correctly
pass down DISCARDs to the underlying device. However, we don't do this
when an entire thin provisioning device is deleted.
This is currently hurting the docker devicemapper backend, for which I'll be
adding a workaround, but it may still be a problem for others.
Attaching a test script that shows this using a loopback mount.
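For context (my illustration, not from the attachment): when a passed-down DISCARD reaches the loop driver, it is serviced by punching a hole in the backing file, which is what actually returns the space. An unprivileged sketch of that hole-punching effect on a sparse file:

```shell
# Illustration only: hole punching on a sparse file, the same mechanism
# the loop driver uses to service a passed-down DISCARD.
set -e
backing=$(mktemp)
truncate -s 16M "$backing"                 # sparse file, ~0 blocks used
# Write 8 MiB of real data so blocks get allocated:
dd if=/dev/zero of="$backing" bs=1M count=8 conv=notrunc,fsync 2>/dev/null
du -k "$backing"                           # roughly 8192 KB allocated
# "Discard" the range by punching it back out:
fallocate --punch-hole --offset 0 --length 8M "$backing"
du -k "$backing"                           # allocation drops back toward 0
rm -f "$backing"
```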
Created attachment 837255 [details]
This script creates a loopback based thinp device to show off the bug.
Uncomment the blkdiscard line to demo a workaround
Created attachment 837331 [details]
Script trying to work around this
The workaround of calling blkdiscard on the device before deleting the thin partition device only works on an unshared device; if you ever took a snapshot of the device, it doesn't work.
Attached is an example script. If you pass it an argument, it creates a single device, snapshots it, then deletes first the snapshot and then the base, blkdiscarding both; no data in the loopback file is freed. If you don't pass an argument, it creates and destroys only the base image, and we fully regain all the space.
Any idea how we can regain all the space in the snapshotted case?
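For reference, the blkdiscard-before-delete workaround discussed above can be sketched roughly as below. The device names and the thin device id are hypothetical, and the script defaults to a dry run (it only prints the commands) because blkdiscard and dmsetup require root:

```shell
#!/bin/sh
# Sketch of the workaround: discard the whole thin volume, then remove it
# and delete it from the pool metadata. Names below are hypothetical.
# Defaults to a dry run (commands are printed); run as root with DRYRUN=0
# to actually execute.
THIN_DEV=${THIN_DEV:-/dev/mapper/thin-volume}
POOL=${POOL:-/dev/mapper/thin-pool}
DEV_ID=${DEV_ID:-0}          # thin device id inside the pool

run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Discard every allocated block so the pool (and the backing store
#    beneath it) can reclaim the space:
run blkdiscard "$THIN_DEV"
# 2. Deactivate the thin volume, then delete it from the pool:
run dmsetup remove "$(basename "$THIN_DEV")"
run dmsetup message "$POOL" 0 "delete $DEV_ID"
```

As noted above, this only reclaims everything when the device is unshared; blocks shared with a snapshot are not freed by discarding one device.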
For reference, here is the upstream Docker issue:
Please try this commit:
It should work for the case described in comment #3 (with or without the argument). Please respond with your test results.
In the near term, DM thin provisioning will not automatically issue discards when mappings are removed or thin devices are deleted. We can look into doing it, but it isn't a high priority; thinp relies on the upper layers to initiate discards.
Alex, do you need someone to build a scratch kernel with the patch Mike pointed to or can you build your own for testing?
*********** MASS BUG UPDATE **************
This bug has been in a needinfo state for several weeks and is being closed with insufficient data due to inactivity. If this is still an issue with Fedora 20, please feel free to reopen the bug and provide the additional information requested.