Bug 1592666 - lookup not assigning gfid if file is not present in all bricks of replica
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Ravishankar N
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On: 1591193 1597117 1598121
Blocks: 1503137
 
Reported: 2018-06-19 06:12 UTC by Ravishankar N
Modified: 2018-09-04 06:50 UTC
CC: 6 users

Fixed In Version: glusterfs-3.12.2-13
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1591193
Environment:
Last Closed: 2018-09-04 06:49:14 UTC




Links
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:50:42 UTC)

Description Ravishankar N 2018-06-19 06:12:29 UTC
+++ This bug was initially created as a clone of Bug #1591193 +++

Description of problem:


commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression wherein if a file is present in only 1 brick of a replica *and* doesn't have a gfid associated with it, it doesn't get healed upon the next lookup from the client.


Found this while automating a glusto-tests case which adds files directly from the backend and expects lookup to assign the gfid and complete the heal.

Steps to reproduce:
- Create a 1x3 vol and add different files to different bricks of the replica directly on the backend.
- Try a lookup on the files individually from the client. It will fail with ESTALE.
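The reproduction steps above can be sketched as a shell session (a minimal sketch; the volume name, hostname, brick paths, and mount point are assumptions for illustration, not taken from the bug report):

```shell
# Assumed: one node (server1) with three brick directories forming a 1x3 replica.
gluster volume create testvol replica 3 \
    server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 force
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Add a different file to each brick directly on the backend, bypassing
# the client. These files get no trusted.gfid xattr assigned.
echo data1 > /bricks/b1/file1
echo data2 > /bricks/b2/file2
echo data3 > /bricks/b3/file3

# Look up each file individually from the client. Before the fix, this
# failed (with ENODATA) instead of assigning a gfid and triggering heal.
stat /mnt/testvol/file1
stat /mnt/testvol/file2
stat /mnt/testvol/file3
```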

Comments:
While adding files directly to the bricks is not a supported use case, we could hit this in the client FOP path too, e.g. if bricks go down at the right time.

--- Additional comment from Worker Ant on 2018-06-14 05:18:42 EDT ---

REVIEW: https://review.gluster.org/20271 (afr: heal gfids when file is not present on all bricks) posted (#1) for review on master by Ravishankar N

--- Additional comment from Ravishankar N on 2018-06-14 05:26:22 EDT ---

Correction:

s/ESTALE/ENODATA in the bug description

--- Additional comment from Worker Ant on 2018-06-19 02:05:48 EDT ---

COMMIT: https://review.gluster.org/20271 committed in master by "Pranith Kumar Karampuri" <pkarampu@redhat.com> with a commit message- afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only 1 brick of replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1591193
Signed-off-by: Ravishankar N <ravishankar@redhat.com>

Comment 3 Ravishankar N 2018-06-19 08:35:32 UTC
Upstream patch https://review.gluster.org/20271 is merged. Will backport it to rhgs-3.4.0 once the bug is accepted.

Comment 10 Vijay Avuthu 2018-07-27 05:29:19 UTC
Update:
==========

Build used: glusterfs-3.12.2-14.el7rhgs.x86_64

Scenario:

1) Create a 1x3 volume and start it
2) On each brick, create files directly on the backend
3) From the mount point, do a name lookup on the files
4) Check that a gfid is created for all the files and is the same on all the bricks
5) Wait for heal to complete
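The gfid check in step 4 and the heal check in step 5 can be sketched as follows (the brick paths and volume name are assumptions for illustration, not from the report):

```shell
# Step 4: after the lookup, every brick's copy of each file should carry
# the same trusted.gfid extended attribute.
for b in /bricks/b1 /bricks/b2 /bricks/b3; do
    getfattr -n trusted.gfid -e hex "$b"/file* 2>/dev/null
done
# The printed trusted.gfid values should be identical across bricks.

# Step 5: heal should eventually report no pending entries.
gluster volume heal testvol info
```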

> Also ran the below automation case for gfid heal:

tests/functional/afr/test_gfid_heal.py::HealGfidTest_cplex_replicated_glusterfs::test_gfid_heal PASSED


Changing status to Verified.

Comment 11 errata-xmlrpc 2018-09-04 06:49:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

