+++ This bug was initially created as a clone of Bug #1599998 +++

Description of problem:
=======================
When the reserve limit is reached, appending to an existing file after a truncate operation results in a hang.

Version-Release number of selected component (if applicable):
3.12.2-13.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a Distributed-Replicate volume and start it.
2) Set storage.reserve to 50 using the command below:
   gluster v set distrep storage.reserve 50
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client will show the mount point as 100% full). I used fallocate to quickly fill the disk.
5) Pick an existing file and truncate it:
   truncate -s 0 fallocate_99
   It will fail with an ENOSPC error.
6) Now try to append some data to the same file:
   cat /etc/redhat-release > fallocate_99
(A consolidated script covering these steps is shown at the end of this comment.)

Actual results:
===============
The append hangs; the cat command hung here.

Expected results:
=================
No hangs.
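For convenience, the steps above can be collected into one script. This is only a minimal sketch: the volume name distrep matches the command in step 2, but the mount point /mnt/distrep, the server name server1, and the 1G fallocate size are assumptions, not part of the original report.

#!/bin/bash
# Hedged reproduction sketch; mount point, server name and file sizes are assumed.
VOL=distrep
MNT=/mnt/distrep

# Step 2: keep 50% of the brick space in reserve
gluster volume set "$VOL" storage.reserve 50

# Step 3: FUSE mount the volume (server1 is a placeholder host name)
mount -t glusterfs server1:/"$VOL" "$MNT"

# Step 4: fill the mount until writes start failing with ENOSPC
i=0
while fallocate -l 1G "$MNT/fallocate_$i" 2>/dev/null; do
    i=$((i + 1))
done
df -h "$MNT"    # the mount point should now show 100% used

# Step 5: truncating an existing file fails with ENOSPC on affected builds
truncate -s 0 "$MNT/fallocate_0"

# Step 6: writing to the same file hangs on affected builds
cat /etc/redhat-release > "$MNT/fallocate_0"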
REVIEW: https://review.gluster.org/20527 (afr: switch lk_owner only when pre-op succeeds) posted (#1) for review on master by Ravishankar N
COMMIT: https://review.gluster.org/20527 committed in master by "Ravishankar N" <ravishankar> with a commit message-

afr: switch lk_owner only when pre-op succeeds

Problem:
In a disk-full scenario, we take a failure path in afr_transaction_perform_fop() and go to the unlock phase, but we change the lk-owner before that, causing the unlock to fail. When the mount issues another fop that takes locks on that file, it hangs.

Fix:
Change the lk-owner only when we are about to perform the fop phase. Also fix the same issue for arbiters when afr_txn_arbitrate_fop() fails the fop.

Also removed the DISK_SPACE_CHECK_AND_GOTO in posix_xattrop; otherwise truncate to zero will fail the pre-op phase with ENOSPC when the user is actually trying to free up space.

Change-Id: Ic4c8a596b4cdf4a7fc189bf00b561113cf114353
fixes: bz#1602236
Signed-off-by: Ravishankar N <ravishankar>
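As the commit message explains, the unlock issued under the changed lk-owner does not match the lock that was taken, so a stale lock is left behind on the bricks and the next fop that needs that lock hangs. On an affected build this can be inspected with a brick statedump; the snippet below is only a rough sketch, and the dump location /var/run/gluster is the common default rather than something stated in this bug.

# Trigger a statedump of the brick processes for the volume
gluster volume statedump distrep

# Statedump files are usually written to /var/run/gluster on each brick host.
# A lock entry that stays ACTIVE while no fop is in progress points to the
# stale lk-owner described above.
grep 'ACTIVE' /var/run/gluster/*.dump.*

With the fix applied, rerunning steps 5 and 6 from the description should complete without a hang and no such stale lock entries should remain.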
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/