Description of problem:
=======================
In a 1 x 3 replicate volume, two bricks were brought down and a metadata change was made to an existing file. "gluster volume heal <volume_name> info" was then executed to check the file pending self-heal. The following message was reported:

root@rhs-client11 [Dec-26-2013-11:18:20] >gluster v heal vol_rep info
*** glibc detected *** /usr/sbin/glfsheal: free(): corrupted unsorted chunks: 0x00000000013ff4d0 ***
Brick rhs-client11:/rhs/bricks/vol-rep-b1
Status: Transport endpoint is not connected
Brick rhs-client12:/rhs/bricks/vol-rep-b1-rep1
Status: Transport endpoint is not connected
Brick rhs-client13.lab.eng.blr.redhat.com:/rhs/bricks/vol-rep-b1-rep2/
/test_file
Number of entries: 1

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.52rhs built on Dec 19 2013 12:20:16

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a 1 x 3 replicate volume and start the volume.
2. Create a file from the fuse mount.
3. Bring down brick1 and brick2.
4. Perform a metadata change from the mount point.
5. Execute "gluster v heal <volume_name> info" on any of the storage nodes.

Actual results:
===============
root@rhs-client11 [Dec-26-2013-11:18:20] >gluster v heal vol_rep info
*** glibc detected *** /usr/sbin/glfsheal: free(): corrupted unsorted chunks: 0x00000000013ff4d0 ***

Expected results:
=================
"gluster volume heal <volume_name> info" should report the pending heal entries without any memory corruption.
Both crashes happen because gf_log() is called after the log file has been closed. Marking this bug as dependent on bug 1046318.
Patch merged downstream at https://code.engineering.redhat.com/gerrit/#/c/17893/

*** This bug has been marked as a duplicate of bug 1046564 ***