Description of problem:
I observed this issue while working on BZ 1707259. When there are no pending self-heals but a lot of I/O is happening on a replicate volume with the gluster-block profile enabled, heal-info hangs. The moment the I/O stops, the command completes successfully. I'm guessing it has something to do with eager locking, but it still needs to be root-caused.

Version-Release number of selected component (if applicable):
rhgs-3.5.0

How reproducible:
Always on my dev VMs.

Steps to Reproduce:
- Create a 1x3 replica volume (3-node setup).
- Apply the gluster-block profile on the volume: gluster v set $volname group gluster-block
- Mount a fuse client on another node and run parallel dd's:
  for i in $(seq 1 20); do dd if=/dev/urandom of=FILE_$i bs=1024 count=102400 & done
- After 10-20 seconds, while the I/O is still going on, run the heal-info command.
- The command will be hung.

Actual results:
The heal info command is hung.

Expected results:
It should not hang.
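For reference, the parallel-dd workload from the steps above can be sketched as a standalone script. This is a scaled-down version (bs/count reduced so it finishes quickly, written to a temp directory rather than a gluster mount); the FILE_ prefix matches the original command, and the loop uses $(seq 1 20) rather than the original's broken "seq{1..20}" syntax.

```shell
# Scaled-down sketch of the parallel I/O workload from the reproduction
# steps; run this from the fuse mountpoint to reproduce the real load.
workdir=$(mktemp -d)
cd "$workdir"

# Launch 20 dd writers in the background, each writing random data.
# (Original used bs=1024 count=102400; reduced here to finish quickly.)
for i in $(seq 1 20); do
  dd if=/dev/urandom of=FILE_"$i" bs=1024 count=4 status=none &
done

wait                 # block until all background dd processes complete
ls FILE_* | wc -l    # expect 20 files
```

While this loop is still running on the real volume, `gluster volume heal $volname info` is the command that hangs; after `wait` returns (I/O stopped), it completes.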
Ok to do it in a later BU
*** Bug 1483977 has been marked as a duplicate of this bug. ***
*** Bug 1643559 has been marked as a duplicate of this bug. ***
*** Bug 1643081 has been marked as a duplicate of this bug. ***
*** Bug 1763596 has been marked as a duplicate of this bug. ***
*** Bug 1812114 has been marked as a duplicate of this bug. ***
As per comment #51, verified the AFR in-service upgrade scenarios of gluster, and they are working as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5603