Bug 1592666

Summary: lookup not assigning gfid if file is not present in all bricks of replica
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Ravishankar N <ravishankar>
Component: replicate
Assignee: Ravishankar N <ravishankar>
Status: CLOSED ERRATA
QA Contact: Vijay Avuthu <vavuthu>
Severity: unspecified
Priority: unspecified
Version: rhgs-3.4
CC: bugs, nchilaka, rhs-bugs, sankarshan, storage-qa-internal, vdas
Target Milestone: ---
Keywords: Triaged
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.12.2-13
Doc Type: If docs needed, set a value
Clone Of: 1591193
Last Closed: 2018-09-04 06:49:14 UTC
Type: Bug
Bug Depends On: 1591193, 1597117, 1598121
Bug Blocks: 1503137

Description Ravishankar N 2018-06-19 06:12:29 UTC
+++ This bug was initially created as a clone of Bug #1591193 +++

Description of problem:


    commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
    wherein if a file is present in only 1 brick of replica *and* doesn't
    have a gfid associated with it, it doesn't get healed upon the next
    lookup from the client. 


Found this while automating a glusto-tests case that adds files directly on the backend and expects the lookup to assign the gfid and complete the heal.

Steps to reproduce:
- Create a 1x3 vol and add different files to different bricks of the replica directly on the backend.
- Try a lookup on the files individually from the client. It will fail with ESTALE.
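
A minimal reproduction sketch of the above steps, assuming root access on the brick host and a started 1x3 replica volume that is already FUSE-mounted on the client; the brick paths, mount point and file names are hypothetical placeholders, not the actual glusto-tests case:

#!/usr/bin/env python3
# Hedged reproduction sketch: brick paths, mount point and file names are
# hypothetical placeholders. Assumes a started 1x3 replica volume that is
# already FUSE-mounted at MOUNT, and root access to the brick directories.
import os
import subprocess

BRICKS = ["/bricks/brick0", "/bricks/brick1", "/bricks/brick2"]  # hypothetical brick paths
MOUNT = "/mnt/testvol"                                           # hypothetical client mount

def sh(cmd):
    # Run a shell command and return (exit code, combined output).
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.returncode, (p.stdout + p.stderr).strip()

# Create a different file directly on each brick, bypassing the client,
# so each file exists on only one brick and has no trusted.gfid xattr yet.
for i, brick in enumerate(BRICKS):
    with open(os.path.join(brick, f"file{i}"), "w") as f:
        f.write("created directly on the backend\n")

# Name lookup from the client. Before the fix this fails (reported as ESTALE
# above, corrected to ENODATA in a later comment) instead of assigning a gfid
# and triggering the heal; with the fix it succeeds.
for i in range(len(BRICKS)):
    rc, out = sh(f"stat {MOUNT}/file{i}")
    print(f"lookup file{i}: rc={rc} {out}")

# After a successful lookup and heal, the gfid should be present on every brick.
for brick in BRICKS:
    print(sh(f"getfattr -n trusted.gfid -e hex {os.path.join(brick, 'file0')}")[1])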

Comments:
While adding files directly to the bricks is not a supported use case, we could also hit this in the client FOP path if the bricks go down at the right time.

--- Additional comment from Worker Ant on 2018-06-14 05:18:42 EDT ---

REVIEW: https://review.gluster.org/20271 (afr: heal gfids when file is not present on all bricks) posted (#1) for review on master by Ravishankar N

--- Additional comment from Ravishankar N on 2018-06-14 05:26:22 EDT ---

Correction:

s/ESTALE/ENODATA in the bug description

--- Additional comment from Worker Ant on 2018-06-19 02:05:48 EDT ---

COMMIT: https://review.gluster.org/20271 committed in master by "Pranith Kumar Karampuri" <pkarampu> with the commit message: afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only 1 brick of replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1591193
Signed-off-by: Ravishankar N <ravishankar>

Comment 3 Ravishankar N 2018-06-19 08:35:32 UTC
Upstream patch https://review.gluster.org/20271 is merged. Will backport it to rhgs-3.4.0 once the bug is accepted.

Comment 10 Vijay Avuthu 2018-07-27 05:29:19 UTC
Update:
==========

Build used: glusterfs-3.12.2-14.el7rhgs.x86_64

Scenario:

1) Create a 1x3 replica volume and start it.
2) On each brick, create files directly from the backend.
3) From the mount point, do a name lookup on the files.
4) Check that a gfid is created for all the files and that it is the same on all the bricks (see the sketch after this list).
5) Wait for the heal to complete.
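
A small sketch of the gfid check in steps 3 and 4 above, assuming root access to the brick directories; the brick paths and file names below are hypothetical placeholders:

#!/usr/bin/env python3
# Hedged gfid-consistency check: brick paths and file names are hypothetical.
# Reads the raw trusted.gfid xattr of each file on every brick and asserts
# that it exists and is identical across bricks.
import subprocess

BRICKS = ["/bricks/brick0", "/bricks/brick1", "/bricks/brick2"]  # hypothetical
FILES = ["file0", "file1", "file2"]                              # hypothetical

def gfid(path):
    # Return the trusted.gfid xattr as a hex string, or None if it is missing.
    p = subprocess.run(
        ["getfattr", "--only-values", "-n", "trusted.gfid", path],
        capture_output=True,
    )
    return p.stdout.hex() if p.returncode == 0 else None

for name in FILES:
    gfids = {brick: gfid(f"{brick}/{name}") for brick in BRICKS}
    values = set(gfids.values())
    # After the name lookup from the mount point, every brick should report
    # the same, non-missing gfid for the file.
    assert None not in values and len(values) == 1, f"gfid mismatch for {name}: {gfids}"
    print(f"{name}: gfid {values.pop()} is the same on all bricks")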

> Also ran the automation case below for gfid heal:

tests/functional/afr/test_gfid_heal.py::HealGfidTest_cplex_replicated_glusterfs::test_gfid_heal PASSED


Changing status to Verified.

Comment 11 errata-xmlrpc 2018-09-04 06:49:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607