Description of problem:
The test case tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t failed and was retried once on the brick-mux regression run. From the build result it looks like the failure was a timing issue: the entry in gfid split-brain got resolved, but data heal had not completed by the time the md5sum of the file was compared with the source.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
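For context, the kind of race described above is normally avoided in the .t regression tests by polling for heal completion instead of asserting once. Below is a minimal sketch of that pattern, assuming the standard helpers from tests/include.rc and tests/volume.rc (EXPECT_WITHIN, get_pending_heal_count, $HEAL_TIMEOUT) and the usual $V0 volume name; it is not the actual code of the failing test:

  # Poll until the pending heal count drops to 0, for up to
  # $HEAL_TIMEOUT seconds, so later checks do not race against data heal.
  EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0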
REVIEW: https://review.gluster.org/20722 (tests: Fix for gfid-mismatch-resolution-with-fav-child-policy.t failure) posted (#1) for review on master by Karthik U S
COMMIT: https://review.gluster.org/20722 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- tests: Fix for gfid-mismatch-resolution-with-fav-child-policy.t failure

This test was retried once on build https://build.gluster.org/job/regression-on-demand-multiplex/174/ (logs for the first try are not available with this build). The test case was failing at line #47, where it checks for the heal count to be 0. Line #51 had passed, which means the gfid split-brain on the file got resolved and both bricks had the same gfid. At line #54 it failed again on the check for the md5sum on both bricks; at that point the md5sum on the brick where the file got impunged matched that of the newly created empty file, which means data heal had not happened for the file. At line #64 enabling granular-entry-heal failed, but without the logs it is not possible to debug this issue.

Change-Id: I56d854dbb9e188cafedfd24a9d463603ae79bd06
fixes: bz#1615331
Signed-off-by: karthik-us <ksubrahm>
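To illustrate the check the commit message refers to at line #54 (comparing the healed file's content across bricks), here is a hedged sketch in the usual .t style; the brick paths $B0/${V0}0 and $B0/${V0}1 and the file name are placeholders, not the test's actual values:

  # Wait for heal to finish before comparing, then require identical
  # md5sums on both bricks; a mismatch means data heal did not happen.
  EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0
  md5_b0=$(md5sum $B0/${V0}0/file | awk '{print $1}')
  md5_b1=$(md5sum $B0/${V0}1/file | awk '{print $1}')
  TEST [ "$md5_b0" = "$md5_b1" ]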
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/