Description of problem:
-----------------------
After issuing 'gluster volume heal', 'gluster volume heal info' hangs when compound fops is enabled on the replica 3 volume.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 7.3
RHGS 3.2.0 interim build (glusterfs-3.8.4-5.el7rhgs)

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a replica 3 volume
2. Optimize the volume for the VM store use case
3. Enable compound fops on the volume
4. Create a VM and install the OS
5. While the OS installation is in progress, kill brick1 on server1
6. After the VM installation is complete, bring the brick back up
7. Trigger self-heal on the volume
8. Get the self-heal info

Actual results:
---------------
The 'gluster volume heal info' command hangs.

Expected results:
-----------------
'gluster volume heal info' should report the correct information about unsynced entries.

Additional info:
----------------
This issue is not seen when compound fops is disabled on the volume.
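For reference, the setup and reproduction steps above correspond roughly to the commands below. This is a sketch, not the exact commands used: the volume name (testvol), host names (server1..server3), and brick path (/bricks/brick1) are placeholders, and the compound-fops option name assumed here is cluster.use-compound-fops as in glusterfs 3.8.

```shell
# Hypothetical names: testvol, server1-3, /bricks/brick1 (run on one server)
gluster volume create testvol replica 3 \
    server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
gluster volume set testvol group virt          # optimize for the VM store use case
gluster volume set testvol cluster.use-compound-fops on
gluster volume start testvol

# Step 5: on server1, kill the brick process while the OS install is running
pkill -f 'glusterfsd.*brick1'

# Steps 6-8: bring the brick back, trigger heal, then query heal info
gluster volume start testvol force             # restarts the killed brick
gluster volume heal testvol                    # trigger self-heal
gluster volume heal testvol info               # hangs when this bug is present
```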
I have tested this with QEMU's native GlusterFS driver (which uses gfapi).
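For context, QEMU's native GlusterFS driver accesses the volume over libgfapi via gluster:// URIs, roughly as below; the volume name, image path, and ISO name are hypothetical.

```shell
# Create the VM image on a hypothetical volume 'testvol' over gfapi
qemu-img create -f qcow2 gluster://server1/testvol/vm1.qcow2 20G

# Boot the installer with the disk accessed through libgfapi (no FUSE mount)
qemu-system-x86_64 -m 2048 \
    -drive file=gluster://server1/testvol/vm1.qcow2,if=virtio \
    -cdrom install.iso
```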
Created attachment 1221739 [details] Client statedump taken from qemu process of VM1 using gdb
Created attachment 1221740 [details] Client statedump taken from qemu process of VM2 using gdb
Created attachment 1221741 [details] clients logs from VM1
Created attachment 1221742 [details] client logs from VM2
You do have the brick statedump too, don't you? Could you please attach those as well? -Krutika
(In reply to Krutika Dhananjay from comment #7)
> You do have the brick statedump too, don't you? Could you please attach
> those as well?
>
> -Krutika

Hi Krutika,

I mistakenly re-provisioned the third server in the cluster to simulate a failed-node scenario, so its brick statedump is gone. I do have the brick statedumps from server1 and server2; I will attach them.
Created attachment 1223015 [details] brick1-statedump
Created attachment 1223016 [details] brick2-statedump
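As an aside, brick statedumps like the ones attached can be generated with the gluster CLI; the dump files are written on each brick server under the statedump directory (typically /var/run/gluster). The volume name here is a placeholder.

```shell
# Dump the state of all brick processes of a hypothetical volume 'testvol'
gluster volume statedump testvol

# On each server, the dumps land in the statedump directory, e.g.:
ls /var/run/gluster/*.dump.*
```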
As per triage, we all agree that this BZ has to be fixed in rhgs-3.2.0. Providing devel_ack.
Patch posted for review on master at http://review.gluster.org/15929. Moving this bug to POST.
https://code.engineering.redhat.com/gerrit/#/c/91332/1 <-- that's the downstream patch. Waiting on QE and PM ack before asking for it to be merged.
Tested with glusterfs-3.8.4-10.el7rhgs using the following steps:

1. Created a replica 3 sharded volume with compound fops enabled
2. Optimized the volume for the VM store use case and FUSE-mounted it on the hypervisor
3. Created a sparse image file for the VM and started the OS installation
4. While the installation was in progress, killed the first brick
5. After the installation completed, brought the brick back up and initiated heal on the volume with 'gluster volume heal <vol>'
6. Checked the heal status with 'gluster volume heal <vol> info'

'gluster volume heal <vol> info' listed the entries that were pending heal.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html