While testing a patch for an RFE to add fallocate support to vfs_glusterfs in samba-rhgs, I hit a problem: the read cache created by quick-read is not invalidated by glfs_fallocate() or glfs_discard().

To reproduce:
1) Untar the attached reproducer.
2) Build the test programs with "make all".
3) Edit the fields in the Makefile to set the hostname, volume, filename and log file name for the test.
4) Run the test with "make test".

t_write writes a 2 KiB file, setting every byte to the pattern 0x11. t_readv then calls discard on half the file, reads the file back, and checks that the discarded first half reads as zeroes and that the second half still contains 0x11.

Expected result:
# make test
./t_write vm140-111 gv0 test211 log
./t_readv vm140-111 gv0 test211 log
Success

Actual result:
# make test
./t_write vm140-111 gv0 test211 log
./t_readv vm140-111 gv0 test211 log
char at position 0 not zeroed out
make: *** [test] Error 251

The attached patch invalidates the cache in quick-read when a fallocate, discard or zerofill operation is called.
REVIEW: https://review.gluster.org/19018 (quick-read: Discard cache for fallocate, zerofill and discard ops) posted (#3) for review on master by Sachin Prabhu
COMMIT: https://review.gluster.org/19018 committed in master by "Sachin Prabhu" <sprabhu> with a commit message:

quick-read: Discard cache for fallocate, zerofill and discard ops

The fallocate, zerofill and discard operations modify file data on the server, rendering stale any cache held by the xlator on the client.

BUG: 1524252
Change-Id: I432146c6390a0cd5869420c373f598da43915f3f
Signed-off-by: Sachin Prabhu <sprabhu>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report. glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html [2] https://www.gluster.org/pipermail/gluster-users/