I/O load-generating script (writes in an infinite loop to the file named by the CLI argument):

#!/usr/bin/perl
use strict;
use warnings;

# Open the file passed as the first CLI argument for writing.
open( my $h, '>', $ARGV[0] ) or die $!;

# Keep issuing small writes at increasing offsets forever.
my $offset = 0;
while (1) {
    seek( $h, $offset, 1 );     # whence 1 = SEEK_CUR, relative to current position
    print $h "offset:$offset ";
    $offset = $offset + 1;
}
1. Create a 1x3 replicated volume and mount it on a FUSE client.
2. Run the Perl script above on a file on the mount where the I/O should happen (it loops forever issuing writes), e.g.: script.pl <some_filename>
3. Now reboot a brick.
The machine becomes slow and all operations become extremely slow.
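Step 1 above can be sketched with the standard Gluster CLI. This is a setup fragment, not taken from the report: the hostnames, brick paths, and volume name are placeholders to adjust for your environment.

```shell
# Assumed hosts and brick paths -- placeholders, adjust for your setup.
gluster volume create repvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start repvol

# Mount the volume with the FUSE client.
mkdir -p /mnt/repvol
mount -t glusterfs server1:/repvol /mnt/repvol
```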
REVIEW: https://review.gluster.org/21339 (cluster/afr: Batch writes in same lock even when multiple fds are open) posted (#1) for review on release-4.1 by Pranith Kumar Karampuri
COMMIT: https://review.gluster.org/21339 committed in release-4.1 by "Shyamsundar Ranganathan" <firstname.lastname@example.org> with a commit message- cluster/afr: Batch writes in same lock even when multiple fds are open
When eager-lock is disabled because multiple fds are open and application
writes land on conflicting regions, the number of locks grows very
fast, and all the CPU ends up being spent just on locking and unlocking
by traversing huge queues in the locks xlator while granting locks.
Reduce the number of locks in transit by bundling the writes under the
same lock, and disable delayed piggy-back when we learn that multiple
fds are open on the file. This will reduce the size of the queues in the
locks xlator. It also reduces the number of network calls.
Please note that this problem can still happen if eager-lock is
disabled, as the writes will not be bundled in the same lock.
Signed-off-by: Pranith Kumar K <email@example.com>
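The batching described in the commit message can be illustrated with a toy model. This is a hypothetical Python sketch, not GlusterFS code: the names (CountingLock, flush_unbatched, flush_batched) are invented for illustration, and a local mutex stands in for the network lock round-trips handled by the locks xlator.

```python
# Toy model: compare per-write locking with draining the whole write
# queue under a single lock acquisition, as the patch does.
import threading

class CountingLock:
    """A mutex that counts how many times it is acquired."""
    def __init__(self):
        self._lock = threading.Lock()
        self.acquisitions = 0
    def __enter__(self):
        self._lock.acquire()
        self.acquisitions += 1
        return self
    def __exit__(self, *exc):
        self._lock.release()

def flush_unbatched(queue, lock, out):
    # One lock/unlock round-trip per write: N acquisitions for N writes.
    for off, data in queue:
        with lock:
            out[off] = data

def flush_batched(queue, lock, out):
    # Drain the whole queue under one acquisition: 1 round-trip total.
    with lock:
        for off, data in queue:
            out[off] = data

queue = [(i, b"x") for i in range(1000)]

lock, out = CountingLock(), {}
flush_unbatched(queue, lock, out)
print("unbatched acquisitions:", lock.acquisitions)  # 1000

lock, out = CountingLock(), {}
flush_batched(queue, lock, out)
print("batched acquisitions:", lock.acquisitions)    # 1
```

The data written is identical either way; only the number of lock round-trips changes, which is why the queues in the locks xlator stay short.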
*** Bug 1635977 has been marked as a duplicate of this bug. ***
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.6, please open a new bug report.
glusterfs-4.1.6 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.