Bug 1516206 - EC DISCARD doesn't punch hole properly
Summary: EC DISCARD doesn't punch hole properly
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Sunil Kumar Acharya
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1518255 1518257 1518260
 
Reported: 2017-11-22 09:33 UTC by Sunil Kumar Acharya
Modified: 2018-03-15 11:21 UTC (History)
CC: 1 user

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1518255 1518257 1518260 (view as bug list)
Environment:
Last Closed: 2018-03-15 11:21:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sunil Kumar Acharya 2017-11-22 09:33:25 UTC
Description of problem:
The DISCARD operation on an EC volume does not punch a hole of the full requested size in some cases.



How reproducible:

Always

Steps to Reproduce:
1. Create 4+2 EC volume

2. Create file
dd if=/dev/urandom of=/mnt/file bs=1024 count=8

3. Punch hole
fallocate -p -o 1500 -l 3000 /mnt/file
 
4. Check the hole size.

Actual results:

The punched hole is smaller than the specified size.

Expected results:

DISCARD should punch a hole of exactly the size specified.
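The check in step 4 can be scripted. The sketch below reproduces the sequence on a local scratch file (the path is illustrative, standing in for /mnt/file), assuming a Linux filesystem with hole-punch support and the util-linux fallocate tool; on correct behavior the punched range reads back as zeros and, since -p implies keep-size, the file's logical size is unchanged.

```shell
# Scratch file standing in for /mnt/file (illustrative path).
f=/tmp/ec_discard_check

# Steps 2-3 from the report: create an 8 KiB random file, then punch a hole.
dd if=/dev/urandom of="$f" bs=1024 count=8 2>/dev/null
fallocate -p -o 1500 -l 3000 "$f"

# Step 4: the punched range must read back as all zeros ...
nonzero=$(dd if="$f" bs=1 skip=1500 count=3000 2>/dev/null | tr -d '\000' | wc -c)
[ "$nonzero" -eq 0 ] && echo "range zeroed" || echo "range NOT fully zeroed"

# ... and -p implies FALLOC_FL_KEEP_SIZE, so the logical size stays 8192 bytes.
stat -c '%s bytes, %b blocks allocated' "$f"
rm -f "$f"
```

Note that on block-based filesystems only whole blocks inside the range are deallocated, while partial head/tail blocks are zeroed in place; either way, the entire punched range must read back as zeros.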

Comment 1 Worker Ant 2017-11-22 09:57:37 UTC
REVIEW: https://review.gluster.org/18838 (cluster/ec: EC DISCARD doesn't punch hole properly) posted (#1) for review on master by Sunil Kumar Acharya

Comment 2 Worker Ant 2017-11-28 09:35:06 UTC
COMMIT: https://review.gluster.org/18838 committed in master by "Sunil Kumar Acharya" <sheggodu> with a commit message- cluster/ec: EC DISCARD doesn't punch hole properly

Problem:
DISCARD operation on EC volume was punching hole of lesser
size than the specified size in some cases.

Solution:
EC was not handling punch hole for tail part in some cases.
Updated the code to handle it appropriately.

BUG: 1516206
Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
Signed-off-by: Sunil Kumar Acharya <sheggodu>

Comment 3 Shyamsundar 2018-03-15 11:21:35 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

