Bug 1516206

Summary: EC DISCARD doesn't punch hole properly
Product: [Community] GlusterFS
Reporter: Sunil Kumar Acharya <sheggodu>
Component: disperse
Assignee: Sunil Kumar Acharya <sheggodu>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: urgent
Docs Contact:
Priority: urgent
Version: mainline
CC: bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1518255 1518257 1518260 (view as bug list)
Environment:
Last Closed: 2018-03-15 11:21:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1518255, 1518257, 1518260

Description Sunil Kumar Acharya 2017-11-22 09:33:25 UTC
Description of problem:
A DISCARD operation on an EC (disperse) volume does not punch a hole of the full requested size in some cases.



How reproducible:

Always

Steps to Reproduce:
1. Create 4+2 EC volume

2. Create file
dd if=/dev/urandom of=/mnt/file bs=1024 count=8

3. Punch hole
fallocate -p -o 1500 -l 3000 /mnt/file
 
4. Check the size of the punched hole.

Actual results:

The punched hole is smaller than the specified size.
Expected results:

Discard should punch a hole of the specified size.
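One way to check the punched region from a client is the SEEK_HOLE interface. The sketch below is a hypothetical verification helper (not part of the reproduction steps above); note that hole reporting is rounded to the filesystem's block boundaries, so a hole punched at offset 1500 typically surfaces only over the block-aligned portion of the range.

```python
import os

def first_hole(path):
    """Return the offset of the first hole in the file.

    On filesystems (or mounts) that do not report holes, SEEK_HOLE
    points at end-of-file, so the returned offset equals the file size.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_HOLE)
    finally:
        os.close(fd)
```

For example, after the `fallocate -p -o 1500 -l 3000 /mnt/file` step, `first_hole("/mnt/file")` reports where the first hole begins, which makes an undersized punch visible without unmounting or inspecting the bricks.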

Comment 1 Worker Ant 2017-11-22 09:57:37 UTC
REVIEW: https://review.gluster.org/18838 (cluster/ec: EC DISCARD doesn't punch hole properly) posted (#1) for review on master by Sunil Kumar Acharya

Comment 2 Worker Ant 2017-11-28 09:35:06 UTC
COMMIT: https://review.gluster.org/18838 committed in master by "Sunil Kumar Acharya" <sheggodu> with a commit message: cluster/ec: EC DISCARD doesn't punch hole properly

Problem:
DISCARD operation on EC volume was punching hole of lesser
size than the specified size in some cases.

Solution:
EC was not handling punch hole for tail part in some cases.
Updated the code to handle it appropriately.
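The alignment arithmetic behind head/tail handling can be illustrated with a small sketch. This is a hypothetical helper, not the actual Gluster code: a discard range is split into an unaligned head, a stripe-aligned middle that can be hole-punched directly, and an unaligned tail that must be zeroed separately; mishandling the tail yields exactly the undersized hole this bug describes. The stripe size used in the example is an assumed value for illustration.

```python
def split_discard(offset, length, stripe):
    """Split a discard request into (head, middle, tail) regions.

    head   -- unaligned bytes before the first stripe boundary (zeroed)
    middle -- stripe-aligned span that can be hole-punched as-is
    tail   -- unaligned bytes after the last stripe boundary (zeroed)
    Regions that are empty are returned as None.
    """
    end = offset + length
    aligned_start = ((offset + stripe - 1) // stripe) * stripe  # round up
    aligned_end = (end // stripe) * stripe                      # round down
    if aligned_start >= aligned_end:
        # The whole request fits inside one stripe: nothing to punch.
        return (offset, length), None, None
    head = (offset, aligned_start - offset) if offset < aligned_start else None
    middle = (aligned_start, aligned_end - aligned_start)
    tail = (aligned_end, end - aligned_end) if end > aligned_end else None
    return head, middle, tail
```

With the numbers from this bug report and an assumed 2048-byte stripe, `split_discard(1500, 3000, 2048)` yields a 548-byte head, a 2048-byte punchable middle, and a 404-byte tail; dropping the tail leaves the hole 404 bytes short of the requested size.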

BUG: 1516206
Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
Signed-off-by: Sunil Kumar Acharya <sheggodu>

Comment 3 Shyamsundar 2018-03-15 11:21:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/