Bug 1597117 - lookup not assigning gfid if file is not present in all bricks of replica
Summary: lookup not assigning gfid if file is not present in all bricks of replica
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1591193 1598121
Blocks: 1592666
 
Reported: 2018-07-02 05:59 UTC by Ravishankar N
Modified: 2018-07-30 18:57 UTC
CC: 2 users

Fixed In Version: glusterfs-4.1.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1591193
Environment:
Last Closed: 2018-07-30 18:57:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-07-02 05:59:10 UTC
+++ This bug was initially created as a clone of Bug #1591193 +++

Description of problem:

    commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
    wherein if a file is present in only 1 brick of replica *and* doesn't
    have a gfid associated with it, it doesn't get healed upon the next
    lookup from the client.


Found this while automating a glusto-tests case that adds files directly on the backend and expects lookup to assign the gfid and complete the heal.

Steps to reproduce:
- Create a 1x3 volume and add a different file to each brick of the replica directly on the backend.
- Try a lookup on each file individually from the client. It will fail with ESTALE.
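A minimal shell sketch of these steps; the volume name, server names, brick paths, and mount point are placeholders, not taken from the original report:

    # Create and start a 1x3 replica volume (names/paths are assumptions).
    gluster volume create testvol replica 3 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
    gluster volume start testvol

    # FUSE-mount the volume on a client.
    mount -t glusterfs server1:/testvol /mnt/testvol

    # Add a different file to each brick directly on the backend, bypassing
    # gluster entirely, so no trusted.gfid xattr gets assigned.
    echo data1 > /bricks/b1/file1   # run on server1
    echo data2 > /bricks/b2/file2   # run on server2
    echo data3 > /bricks/b3/file3   # run on server3

    # Lookup from the client; with the regression this fails
    # (with ENODATA, per the correction in a later comment).
    stat /mnt/testvol/file1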

Comments:
While adding files directly to the bricks is not a supported use case, we could hit this in the client FOP path too if the bricks go down at just the right time.
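For reference, whether a file has a gfid can be checked on the backend with getfattr (brick path as in the sketch above); a file created directly on the brick lacks the trusted.gfid xattr until a lookup heals it:

    # Dump all xattrs in hex; a healed file shows a trusted.gfid entry,
    # a backend-created one does not.
    getfattr -d -m . -e hex /bricks/b1/file1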

--- Additional comment from Worker Ant on 2018-06-14 05:18:42 EDT ---

REVIEW: https://review.gluster.org/20271 (afr: heal gfids when file is not present on all bricks) posted (#1) for review on master by Ravishankar N

--- Additional comment from Ravishankar N on 2018-06-14 05:26:22 EDT ---

Correction:

s/ESTALE/ENODATA/ in the bug description.

--- Additional comment from Worker Ant on 2018-06-19 02:05:48 EDT ---

COMMIT: https://review.gluster.org/20271 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only 1 brick of replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1591193
Signed-off-by: Ravishankar N <ravishankar>

Comment 1 Worker Ant 2018-07-02 06:11:15 UTC
REVIEW: https://review.gluster.org/20431 (afr: heal gfids when file is not present on all bricks) posted (#2) for review on release-4.1 by Ravishankar N

Comment 2 Worker Ant 2018-07-09 15:18:56 UTC
COMMIT: https://review.gluster.org/20431 committed in release-4.1 by "Shyamsundar Ranganathan" <srangana> with a commit message- afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only 1 brick of replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1597117
Signed-off-by: Ravishankar N <ravishankar>
(cherry picked from commit eb472d82a083883335bc494b87ea175ac43471ff)
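With this fix, a fresh lookup from the client should assign the gfid and complete the heal. One plausible way to verify, reusing the placeholder names from the reproducer sketch above:

    # The lookup that previously failed should now succeed.
    stat /mnt/testvol/file1

    # The file should now carry a gfid on the brick it was created on
    # (and be recreated with the same gfid on the other two bricks).
    getfattr -n trusted.gfid -e hex /bricks/b1/file1

    # No entries should remain pending heal.
    gluster volume heal testvol info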

Comment 3 Shyamsundar 2018-07-30 18:57:21 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.2, please open a new bug report.

glusterfs-4.1.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-July/000106.html
[2] https://www.gluster.org/pipermail/gluster-users/
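To confirm a node is actually running the fixed release, checking the installed version is enough; both client and server packages should report 4.1.2 or later:

    glusterfs --version
    gluster --version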

