+++ This bug was initially created as a clone of Bug #1591193 +++

Description of problem:

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression wherein if a file is present on only 1 brick of the replica *and* doesn't have a gfid associated with it, it doesn't get healed upon the next lookup from the client.

Found this while automating a glusto-test case which adds files directly from the backend and expects a lookup to assign the gfid and complete the heal.

Steps to reproduce:
- Create a 1x3 vol and add different files to different bricks of the replica directly on the backend.
- Try a lookup on the files individually from the client. It will fail with ESTALE. (A reproduction sketch with placeholder names follows the comments below.)

Comments: While adding files directly to the bricks is not a supported use case, we could hit this in the client FOP path too if the bricks go down at the right time, etc.

--- Additional comment from Worker Ant on 2018-06-14 05:18:42 EDT ---

REVIEW: https://review.gluster.org/20271 (afr: heal gfids when file is not present on all bricks) posted (#1) for review on master by Ravishankar N

--- Additional comment from Ravishankar N on 2018-06-14 05:26:22 EDT ---

Correction: s/ESTALE/ENODATA in the bug description

--- Additional comment from Worker Ant on 2018-06-19 02:05:48 EDT ---

COMMIT: https://review.gluster.org/20271 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression wherein if a file is present in only 1 brick of replica *and* doesn't have a gfid associated with it, it doesn't get healed upon the next lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1591193
Signed-off-by: Ravishankar N <ravishankar>
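For reference, a minimal reproduction sketch of the steps above. All names here are placeholders, not taken from this report: servers server1/server2/server3, brick paths under /bricks/testvol, a volume named testvol, and a client mount at /mnt/testvol; brick directories and the mount point are assumed to exist already.

# Create and start a 1x3 replica volume.
gluster volume create testvol replica 3 \
    server1:/bricks/testvol/b1 server2:/bricks/testvol/b2 server3:/bricks/testvol/b3
gluster volume start testvol

# Mount the volume on a client.
mount -t glusterfs server1:/testvol /mnt/testvol

# Create different files directly on the back end of each brick (not a
# supported use case; done only to get files that lack the trusted.gfid xattr).
ssh server1 'echo data1 > /bricks/testvol/b1/file1'
ssh server2 'echo data2 > /bricks/testvol/b2/file2'
ssh server3 'echo data3 > /bricks/testvol/b3/file3'

# Name lookup from the client. With the regression this fails (ENODATA)
# instead of assigning a gfid and triggering the heal.
stat /mnt/testvol/file1

With the fix, the lookup should assign the gfid and heal the file to the remaining bricks; gluster volume heal testvol info should eventually show no pending entries.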
Upstream patch https://review.gluster.org/20271 is merged. Will backport it to rhgs-3.4.0 once the bug is accepted.
Update:
==========
Build used: glusterfs-3.12.2-14.el7rhgs.x86_64

Scenario:
1) Create a 1 * 3 volume and start it.
2) On each brick, create files from the back end.
3) From the mount point, do a name lookup.
4) Check that a gfid is created for all the files and that the gfid is the same on all the bricks (see the sketch after this comment).
5) Wait for heal to complete.

> Also ran the below automation case for gfid heal

tests/functional/afr/test_gfid_heal.py::HealGfidTest_cplex_replicated_glusterfs::test_gfid_heal PASSED

Changing status to Verified.
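A minimal sketch of the gfid check in step 4, reusing the placeholder hostnames and brick paths from the reproduction sketch above (trusted.* xattrs must be read as root on the brick back end):

# After the name lookup from the mount point, the gfid xattr should be
# present and identical on all three bricks.
for host in server1 server2 server3; do
    ssh "$host" 'getfattr -n trusted.gfid -e hex /bricks/testvol/b*/file1'
done

# Once the gfid heal has completed, no entries should be pending.
gluster volume heal testvol info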
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607