+++ This bug was initially created as a clone of Bug #1516206 +++

Description of problem:
DISCARD operation on an EC volume doesn't punch the hole properly in some cases.

How reproducible:
Always

Steps to Reproduce:
1. Create a 4+2 EC volume
2. Create a file:
   dd if=/dev/urandom of=/mnt/file bs=1024 count=8
3. Punch a hole:
   fallocate -p -o 1500 -l 3000 /mnt/file
4. When checked, the hole size is less than the specified size.

Actual results:
The punched hole is smaller than the requested size.

Expected results:
DISCARD should punch a hole of the specified size.

--- Additional comment from Worker Ant on 2017-11-22 04:57:37 EST ---

REVIEW: https://review.gluster.org/18838 (cluster/ec: EC DISCARD doesn't punch hole properly) posted (#1) for review on master by Sunil Kumar Acharya

--- Additional comment from Worker Ant on 2017-11-28 04:35:06 EST ---

COMMIT: https://review.gluster.org/18838 committed in master by "Sunil Kumar Acharya" <sheggodu> with a commit message:

cluster/ec: EC DISCARD doesn't punch hole properly

Problem: DISCARD operation on an EC volume was punching a hole smaller than the specified size in some cases.

Solution: EC was not handling the punch-hole for the tail part in some cases. Updated the code to handle it appropriately.

BUG: 1516206
Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
Signed-off-by: Sunil Kumar Acharya <sheggodu>
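The reproduction steps above can be sketched on a local file to show what the check in step 4 looks like. This is only an illustration (the path and filesystem are assumptions; the actual bug appears only on a mounted Gluster EC volume): `fallocate -p` uses `FALLOC_FL_PUNCH_HOLE`, which must leave the file size unchanged while deallocating blocks inside the range.

```shell
# Illustrative local file; on the real reproducer this would be a
# file on the 4+2 EC volume mount (e.g. /mnt/file).
f=/tmp/ec_discard_demo
dd if=/dev/urandom of="$f" bs=1024 count=8 2>/dev/null

# Punch a 3000-byte hole at offset 1500. FALLOC_FL_PUNCH_HOLE keeps
# the file size intact; only the allocated blocks should shrink.
fallocate -p -o 1500 -l 3000 "$f"

# File size must still be 8192; the blocks count reveals how much
# was actually deallocated (on the buggy EC volume, too little).
stat -c 'size=%s blocks=%b' "$f"
```

On the EC volume from the bug report, comparing the `blocks` count before and after the punch is how one observes that the hole is smaller than requested.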
REVIEW: https://review.gluster.org/18877 (cluster/ec: EC DISCARD doesn't punch hole properly) posted (#1) for review on release-3.13 by Sunil Kumar Acharya
COMMIT: https://review.gluster.org/18877 committed in release-3.13 by "Sunil Kumar Acharya" <sheggodu> with a commit message:

cluster/ec: EC DISCARD doesn't punch hole properly

Problem: DISCARD operation on an EC volume was punching a hole smaller than the specified size in some cases.

Solution: EC was not handling the punch-hole for the tail part in some cases. Updated the code to handle it appropriately.

>BUG: 1516206
>Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
>Signed-off-by: Sunil Kumar Acharya <sheggodu>

BUG: 1518257
Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
Signed-off-by: Sunil Kumar Acharya <sheggodu>
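The commit message says the tail part of the discard range was not being handled. As a rough sketch (the stripe size below is an assumption for illustration, not taken from the EC code), a discard range splits into an unaligned head, a stripe-aligned middle that can be discarded as whole stripes, and an unaligned tail that must be zeroed separately; the tail is the piece the fix addresses:

```shell
# Hypothetical stripe size: 4 data bricks x 512-byte chunks = 2048 bytes.
STRIPE=2048
OFFSET=1500
LENGTH=3000
END=$((OFFSET + LENGTH))

# Round the range start up and the range end down to stripe boundaries.
MID_START=$(( (OFFSET + STRIPE - 1) / STRIPE * STRIPE ))
MID_END=$(( END / STRIPE * STRIPE ))

echo "head:   offset=$OFFSET len=$((MID_START - OFFSET))"      # unaligned, zeroed in place
echo "middle: offset=$MID_START len=$((MID_END - MID_START))"  # full stripes, punched
echo "tail:   offset=$MID_END len=$((END - MID_END))"          # unaligned, was mishandled
```

With the reproducer's values (offset 1500, length 3000) this yields a 548-byte head, a 2048-byte aligned middle, and a 404-byte tail, which matches the symptom: if the tail is skipped, the effective hole is smaller than the 3000 bytes requested.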
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/