Bug 1592666 - lookup not assigning gfid if file is not present in all bricks of replica
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Ravishankar N
QA Contact: Vijay Avuthu
Whiteboard: Triaged
Depends On: 1591193 1597117 1598121
Blocks: 1503137
Reported: 2018-06-19 02:12 EDT by Ravishankar N
Modified: 2018-09-04 02:50 EDT
CC: 6 users

See Also:
Fixed In Version: glusterfs-3.12.2-13
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1591193
Environment:
Last Closed: 2018-09-04 02:49:14 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---




External Trackers:
Tracker: Red Hat Product Errata RHSA-2018:2607 (Priority: None, Status: None, Summary: None, Last Updated: 2018-09-04 02:50 EDT)

Description Ravishankar N 2018-06-19 02:12:29 EDT
+++ This bug was initially created as a clone of Bug #1591193 +++

Description of problem:

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression wherein if a file is present in only 1 brick of a replica *and* doesn't have a gfid associated with it, it doesn't get healed upon the next lookup from the client.

Found this while automating a glusto-tests case which adds files directly from the backend and expects lookup to assign the gfid and complete the heal.

Steps to reproduce:
- Create a 1x3 vol and add different files to different bricks of the replica directly on the backend.
- Try a lookup on the files individually from the client. It will fail with ESTALE.
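The steps above can be sketched as a CLI session. This is a minimal, hypothetical reproduction: the server name (`server1`), volume name (`testvol`), brick paths, and mount point are placeholders, and the commands assume a running glusterd with all bricks on one test node.

```shell
# Create and start a 1x3 replica volume, then mount it
# (single-node layout with `force` is for testing only).
gluster volume create testvol replica 3 \
  server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 force
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Add different files to different bricks directly on the backend,
# bypassing the client, so no gfid is assigned:
echo a > /bricks/b1/file1
echo b > /bricks/b2/file2
echo c > /bricks/b3/file3

# Name lookup from the client; with the regression this fails
# instead of assigning a gfid and triggering the heal:
stat /mnt/testvol/file1
```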

Comments:
While adding files directly to the bricks is not a supported use case, we could also hit this in the client FOP path if the bricks go down at the right time.
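For context, a gfid, once assigned, is stored in the file's trusted.gfid xattr and also determines the hard-link path gluster keeps under the brick's .glusterfs directory (first two hex characters, then the next two, then the full gfid). A small sketch of that mapping; the function name is ours, not a gluster tool:

```shell
# Map a gfid (the UUID reported by `getfattr -n trusted.gfid`) to the
# hard-link path under a brick's .glusterfs directory.
gfid_backend_path() {
  local gfid="$1"
  printf '.glusterfs/%s/%s/%s\n' "${gfid:0:2}" "${gfid:2:2}" "$gfid"
}

gfid_backend_path "a1b2c3d4-5678-90ab-cdef-1234567890ab"
# → .glusterfs/a1/b2/a1b2c3d4-5678-90ab-cdef-1234567890ab
```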

--- Additional comment from Worker Ant on 2018-06-14 05:18:42 EDT ---

REVIEW: https://review.gluster.org/20271 (afr: heal gfids when file is not present on all bricks) posted (#1) for review on master by Ravishankar N

--- Additional comment from Ravishankar N on 2018-06-14 05:26:22 EDT ---

Correction:

s/ESTALE/ENODATA in the bug description

--- Additional comment from Worker Ant on 2018-06-19 02:05:48 EDT ---

COMMIT: https://review.gluster.org/20271 committed in master by "Pranith Kumar Karampuri" <pkarampu@redhat.com> with a commit message: afr: heal gfids when file is not present on all bricks

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265 introduced a regression
wherein if a file is present in only 1 brick of replica *and* doesn't
have a gfid associated with it, it doesn't get healed upon the next
lookup from the client. Fix it.

Change-Id: I7d1111dcb45b1b8b8340a7d02558f05df70aa599
fixes: bz#1591193
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Comment 3 Ravishankar N 2018-06-19 04:35:32 EDT
Upstream patch https://review.gluster.org/20271 is merged. Will backport it to rhgs-3.4.0 once the bug is accepted.
Comment 10 Vijay Avuthu 2018-07-27 01:29:19 EDT
Update:
==========

Build used: glusterfs-3.12.2-14.el7rhgs.x86_64

Scenario:

1) Create a 1x3 volume and start it
2) On each brick, create files directly from the backend
3) From the mount point, do a name lookup
4) Check that a gfid is created for all the files and that it is the same on all bricks
5) Wait for heal to complete
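Steps 4 and 5 can be sketched as the following checks. This is a hypothetical verification snippet: the volume name, brick paths, and file name are placeholders, reading trusted.* xattrs requires root, and `getfattr` comes from the attr package.

```shell
# Step 4: after the client-side lookup, every brick should report the
# same trusted.gfid value for a given file.
for brick in /bricks/b1 /bricks/b2 /bricks/b3; do
  getfattr -n trusted.gfid -e hex "$brick/file1" 2>/dev/null
done

# Step 5: poll heal status; the heal is complete when no entries
# remain pending on any brick.
gluster volume heal testvol info
```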

> Also ran the below automation case for gfid heal:

tests/functional/afr/test_gfid_heal.py::HealGfidTest_cplex_replicated_glusterfs::test_gfid_heal PASSED


Changing status to Verified.
Comment 11 errata-xmlrpc 2018-09-04 02:49:14 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
