Bug 1394298 - Add hole punch support
Summary: Add hole punch support
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.7.8
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-11 15:34 UTC by Ram Ankireddypalle
Modified: 2017-03-08 10:57 UTC (History)
CC: 6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:57:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ram Ankireddypalle 2016-11-11 15:34:32 UTC
Description of problem:
GlusterFS currently does not support hole punching on disperse volumes.

Version-Release number of selected component (if applicable):
glusterfs 3.7.8

How reproducible:
Trying to punch a hole in an existing file on a disperse volume fails.

Steps to Reproduce:
1. Create a distributed-disperse volume and mount it via FUSE.
2. Run fallocate against a file on the mount (e.g. fallocate -o0 -l1G <file>).
3. Observe that the call fails.

Actual results:
fallocate fails with "Operation not supported" (EOPNOTSUPP); see the client log excerpts in comment 2.

Expected results:
The hole punch system call should succeed.

Additional info:
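For reference, the operation we expect to work is fallocate(2) with the punch-hole flag. A minimal sketch of the call (illustration only, not our application code; the file name is just an argument to the sketch):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <linux/falloc.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Punch a 512 MiB hole at offset 0; PUNCH_HOLE must be combined
     * with KEEP_SIZE, so the logical file size is left unchanged. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  0, 512LL * 1024 * 1024) < 0) {
        perror("fallocate(PUNCH_HOLE)"); /* currently fails with EOPNOTSUPP */
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}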

Comment 1 Ravishankar N 2016-11-11 16:53:20 UTC
Hi Ram, I need some clarification here. What I meant in the mailing-list thread was that fallocate with the keep-size flag (and therefore the hole-punch flag as well, since punch-hole must be combined with keep-size) will report incorrect disk usage for the file:


On a normal XFS file system:
==============================
0:root@vm4 ~$ #create a file.
0:root@vm4 ~$ echo > file
0:root@vm4 ~$ #fallocate 1G with keep size flag.
0:root@vm4 ~$ fallocate -o0 -l1G -n file
0:root@vm4 ~$ #check file size and disk space usage.
0:root@vm4 ~$ ll -h file
-rw-r--r--. 1 root root 1 Nov 11 22:09 file
0:root@vm4 ~$ du -h file
1.0G    file
0:root@vm4 ~$
0:root@vm4 ~$ #punch 512MB hole and check usage again
0:root@vm4 ~$ fallocate -o0 -l512M -p file
0:root@vm4 ~$ ll -h file
-rw-r--r--. 1 root root 1 Nov 11 22:10 file
0:root@vm4 ~$ du -h file
512M    file
0:root@vm4 ~$


On a normal plain distribute 1 brick volume using fuse mount:
===============================================================
0:root@vm4 fuse_mnt$ echo >file
0:root@vm4 fuse_mnt$ fallocate -o0 -l1G -n file
0:root@vm4 fuse_mnt$ ll -h file
-rw-r--r--. 1 root root 1 Nov 11 22:13 file
0:root@vm4 fuse_mnt$ du -h file
512     file -->This is wrong.
0:root@vm4 fuse_mnt$
0:root@vm4 fuse_mnt$ #run the command on the backend brick and observe correct disk usage is reported.
0:root@vm4 fuse_mnt$
0:root@vm4 fuse_mnt$ ll -h /bricks/brick1/file
-rw-r--r--. 2 root root 1 Nov 11 22:13 /bricks/brick1/file
0:root@vm4 fuse_mnt$ du -h /bricks/brick1/file
1.0G    /bricks/brick1/file
0:root@vm4 fuse_mnt$
0:root@vm4 fuse_mnt$
0:root@vm4 fuse_mnt$ #punch hole
0:root@vm4 fuse_mnt$ fallocate -o0 -l^C
130:root@vm4 fuse_mnt$ fallocate -o0 -l512M -p file
0:root@vm4 fuse_mnt$ ll -h file
-rw-r--r--. 1 root root 1 Nov 11 22:14 file
0:root@vm4 fuse_mnt$ du -h file
512     file
0:root@vm4 fuse_mnt$
0:root@vm4 fuse_mnt$ ll -h /bricks/brick1/file
-rw-r--r--. 2 root root 1 Nov 11 22:14 /bricks/brick1/file
0:root@vm4 fuse_mnt$ du -h /bricks/brick1/file
512M    /bricks/brick1/file
0:root@vm4 fuse_mnt$
========================================
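
For clarity, the "disk usage" being compared above is st_blocks as returned by stat(2) through the mount; a small sketch to print it directly, in case it helps while debugging (this is just an illustration, not part of glusterfs):

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) < 0) {
        perror("stat");
        return 1;
    }

    /* du reports allocated blocks (st_blocks, in 512-byte units),
     * while ls -l reports the logical size (st_size). */
    printf("st_size   : %lld bytes\n", (long long)st.st_size);
    printf("st_blocks : %lld (%lld bytes allocated)\n",
           (long long)st.st_blocks, (long long)st.st_blocks * 512);
    return 0;
}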

But you seem to be saying that punching a hole in the file *fails*. I'm not sure we are on the same page here. Does this happen only on disperse volumes? Can you explain the steps in detail?

Comment 2 Ram Ankireddypalle 2016-11-11 17:15:50 UTC
(In reply to Ravishankar N from comment #1)
> Hi Ram, Need some clarification here. What I meant to say in the
> mailing-list thread was fallocate with keep-size flag (and therefore the
> hole-punch flag also) will report incorrect disk usage for the file:
> 
> 
> On a normal XFS file system:
> ==============================
> 0:root@vm4 ~$ #create a file.
> 0:root@vm4 ~$ echo > file
> 0:root@vm4 ~$ #fallocate 1G with keep size flag.
> 0:root@vm4 ~$ fallocate -o0 -l1G -n file
> 0:root@vm4 ~$ #check file size and disk space usage.
> 0:root@vm4 ~$ ll -h file
> -rw-r--r--. 1 root root 1 Nov 11 22:09 file
> 0:root@vm4 ~$ du -h file
> 1.0G    file
> 0:root@vm4 ~$
> 0:root@vm4 ~$ #punch 512MB hole and check usage again
> 0:root@vm4 ~$ fallocate -o0 -l512M -p file
> 0:root@vm4 ~$ ll -h file
> -rw-r--r--. 1 root root 1 Nov 11 22:10 file
> 0:root@vm4 ~$ du -h file
> 512M    file
> 0:root@vm4 ~$
> 
> 
> On a normal plain distribute 1 brick volume using fuse mount:
> ===============================================================
> 0:root@vm4 fuse_mnt$ echo >file
> 0:root@vm4 fuse_mnt$ fallocate -o0 -l1G -n file
> 0:root@vm4 fuse_mnt$ ll -h file
> -rw-r--r--. 1 root root 1 Nov 11 22:13 file
> 0:root@vm4 fuse_mnt$ du -h file
> 512     file -->This is wrong.
> 0:root@vm4 fuse_mnt$
> 0:root@vm4 fuse_mnt$ #run the command on the backend brick and observe
> correct disk usage is reported.
> 0:root@vm4 fuse_mnt$
> 0:root@vm4 fuse_mnt$ ll -h /bricks/brick1/file
> -rw-r--r--. 2 root root 1 Nov 11 22:13 /bricks/brick1/file
> 0:root@vm4 fuse_mnt$ du -h /bricks/brick1/file
> 1.0G    /bricks/brick1/file
> 0:root@vm4 fuse_mnt$
> 0:root@vm4 fuse_mnt$
> 0:root@vm4 fuse_mnt$ #punch hole
> 0:root@vm4 fuse_mnt$ fallocate -o0 -l^C
> 130:root@vm4 fuse_mnt$ fallocate -o0 -l512M -p file
> 0:root@vm4 fuse_mnt$ ll -h file
> -rw-r--r--. 1 root root 1 Nov 11 22:14 file
> 0:root@vm4 fuse_mnt$ du -h file
> 512     file
> 0:root@vm4 fuse_mnt$
> 0:root@vm4 fuse_mnt$ ll -h /bricks/brick1/file
> -rw-r--r--. 2 root root 1 Nov 11 22:14 /bricks/brick1/file
> 0:root@vm4 fuse_mnt$ du -h /bricks/brick1/file
> 512M    /bricks/brick1/file
> 0:root@vm4 fuse_mnt$
> ========================================
> 
> But you seem to be saying punch hole *fails* on the file. Not sure if we are
> on the same page here. Does this happen only on disperse volumes? Can you
> explain the steps in detail?

Ravi,
      Thanks for checking this. For disperse volumes, fallocate fails. In 3.7.8, hole-punching support was not available for disperse volumes. During the GlusterFS developer summit, Praneeth K mentioned that support for hole punching on disperse volumes is in the works.

[root@flash1 glusterfs]# gluster volume info

Volume Name: FlashStoragePool
Type: Distributed-Disperse
Volume ID: 1c49c595-5117-481d-8abe-83a4c6579c91
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: flash1sds:/ws/disk1/ws_brick
Brick2: flash2sds:/ws/disk1/ws_brick
Brick3: flash3sds:/ws/disk1/ws_brick
Brick4: flash1sds:/ws/disk2/ws_brick
Brick5: flash2sds:/ws/disk2/ws_brick
Brick6: flash3sds:/ws/disk2/ws_brick
Options Reconfigured:
performance.readdir-ahead: on
nfs.export-dirs: off
nfs.export-volumes: off
nfs.disable: on
performance.read-ahead: off
auth.allow: flash1sds,flash2sds,flash3sds

[root@flash1 glusterfs]# mount
...
flash1sds:/FlashStoragePool on /ws/glus type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
...

[root@flash1 glus]# fallocate -o0 -l1G -n test
fallocate: keep size mode (-n option) unsupported

[root@flash1 glus]# fallocate -o0 -l1G  test
fallocate: test: fallocate failed: Operation not supported

ws-glus.log
[2016-11-11 17:02:32.281455] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 18948: FALLOCATE() ERR => -1 (Operation not supported)
[2016-11-11 17:03:38.036083] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 18958: FALLOCATE() ERR => -1 (Operation not supported)
[2016-11-11 17:04:10.588579] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 18964: FALLOCATE() ERR => -1 (Operation not supported)
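
For completeness: until native support lands, the only workaround on our side is for plain preallocation, e.g. falling back to posix_fallocate() (whose glibc implementation writes the blocks when the filesystem does not support fallocate). There is no real workaround for punching holes, since writing zeroes does not free space. A rough sketch of that fallback, purely illustrative and not our actual code:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Try native preallocation of 1 GiB first. */
    if (fallocate(fd, 0, 0, 1LL << 30) == 0) {
        puts("fallocate: native preallocation succeeded");
    } else if (errno == EOPNOTSUPP) {
        /* The volume does not support fallocate (as in the logs above);
         * glibc's posix_fallocate() falls back to writing the blocks,
         * which is slow but works on any filesystem. */
        int err = posix_fallocate(fd, 0, 1LL << 30);
        if (err != 0) {
            fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
            close(fd);
            return 1;
        }
        puts("posix_fallocate: emulated preallocation succeeded");
    } else {
        perror("fallocate");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}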

Comment 3 Ravishankar N 2016-11-14 04:06:27 UTC
http://review.gluster.org/#/c/15200/ seems to be the patch to add fallocate support for disperse volumes.

Comment 4 Kaushal 2017-03-08 10:57:06 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

