+++ This bug was initially created as a clone of Bug #1599998 +++
Description of problem:
When the reserve limits are reached, an append to an existing file after a truncate operation results in a hang.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1) Create a Distributed-Replicate volume and start it.
2) Set storage.reserve to 50 using the command below:
gluster v set distrep storage.reserve 50
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client will show the mount point as 100% full). I used fallocate to fill the disk quickly.
5) Pick an existing file and truncate it:
truncate -s 0 fallocate_99
The truncate fails with an ENOSPC error.
6) Now try to append some data to the same file:
cat /etc/redhat-release > fallocate_99
The append hangs; the cat command never returns.
REVIEW: https://review.gluster.org/20527 (afr: switch lk_owner only when pre-op succeeds) posted (#1) for review on master by Ravishankar N
COMMIT: https://review.gluster.org/20527 committed in master by "Ravishankar N" <firstname.lastname@example.org> with a commit message- afr: switch lk_owner only when pre-op succeeds
In a disk-full scenario, we take a failure path in afr_transaction_perform_fop()
and go to the unlock phase. But we change the lk-owner before that, causing the
unlock to fail. When the mount issues another fop that takes locks on that file,
it hangs.
Change the lk-owner only when we are about to perform the fop phase.
Also fix the same issue for arbiters when afr_txn_arbitrate_fop() fails the fop.
Also removed the DISK_SPACE_CHECK_AND_GOTO in posix_xattrop. Otherwise a truncate
to zero will fail the pre-op phase with ENOSPC when the user is actually trying to
free up space.
Signed-off-by: Ravishankar N <email@example.com>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.
glusterfs-5.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.