Bug 1129529 - Find <mnt> | xargs stat leads to mismatching gfids on files without gfid
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1136823
 
Reported: 2014-08-13 05:49 UTC by Pranith Kumar K
Modified: 2015-05-14 17:43 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1136823 (view as bug list)
Environment:
Last Closed: 2015-05-14 17:27:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Pranith Kumar K 2014-08-13 05:49:15 UTC
Description of problem:
Running find <mnt> | xargs stat from two mounts in parallel leads to mismatching gfids on a replicate volume.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. On both bricks of a replicate volume, create the same directory hierarchy.
2. Create two FUSE mounts of this volume.
3. From both mounts in parallel, execute find <mnt> | xargs stat.
4. find fails with Input/output error; getfattr on the affected file shows a gfid mismatch.
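The steps above can be sketched as a shell script. The mount and brick paths (MNT1, MNT2, /bricks/...) are placeholders for an existing replicate setup, not paths from the original report:

```shell
# Placeholder mount points for two FUSE clients of the same replicate volume.
MNT1=/mnt/gv0-client1
MNT2=/mnt/gv0-client2

# Crawl a mount and stat every entry; Input/output errors here are the symptom.
crawl() {
    find "$1" | xargs stat > /dev/null
}

# Step 3: run the crawl from both mounts in parallel.
crawl "$MNT1" &
crawl "$MNT2" &
wait

# Step 4: on a failing file, compare the gfid xattr on each brick, e.g.:
#   getfattr -d -m . -e hex /bricks/brick1/path/to/file
#   getfattr -d -m . -e hex /bricks/brick2/path/to/file
# A differing trusted.gfid value between the bricks is the mismatch.
```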

Actual results:
find fails with an Input/output error, and the affected files end up with mismatching gfids across the bricks.

Expected results:
find succeeds and no gfid mismatch occurs.

Additional info:

Comment 1 Anand Avati 2014-08-13 05:51:04 UTC
REVIEW: http://review.gluster.org/8466 (cluster/afr: Perform gfid heal inside locks) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 2 Anand Avati 2014-08-13 06:13:47 UTC
REVIEW: http://review.gluster.org/8466 (cluster/afr: Perform gfid heal inside locks.) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 3 Anand Avati 2014-08-18 02:29:22 UTC
REVIEW: http://review.gluster.org/8466 (cluster/afr: Perform gfid heal inside locks.) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 4 Anand Avati 2014-08-21 10:09:44 UTC
REVIEW: http://review.gluster.org/8466 (cluster/afr: Perform gfid heal inside locks.) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 5 Anand Avati 2014-08-22 07:37:54 UTC
REVIEW: http://review.gluster.org/8512 (cluster/afr: Perform gfid heal inside locks.) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 6 Anand Avati 2014-08-22 09:32:05 UTC
COMMIT: http://review.gluster.org/8512 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 3b70b160a46b22b77a8ad1897440ec1346795a0f
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Wed Aug 13 11:11:17 2014 +0530

    cluster/afr: Perform gfid heal inside locks.
    
    Problem:
    Allowing lookup with 'gfid-req' will lead to assigning gfid at posix layer.
    When two mounts perform lookup in parallel that can lead to both bricks getting
    different gfids leading to gfid-mismatch/EIO for the lookup.
    
    Fix:
    Perform gfid heal inside lock.
    
    BUG: 1129529
    Change-Id: I20c6c5e25ee27eeb906bff2f4c8ad0da18d00090
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/8512
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
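The race described in the commit message, and why serializing the heal fixes it, can be illustrated with a toy Python model (this is not GlusterFS code; the brick dicts, paths, and function names are invented for illustration). Each brick is a dict mapping path to gfid, and setdefault stands in for the posix layer assigning the client-supplied gfid-req only when no gfid exists yet:

```python
import threading
import uuid


def racy_interleaving():
    """Deterministic replay of the bad interleaving: two clients, two bricks."""
    bricks = [{}, {}]
    gfid_a, gfid_b = str(uuid.uuid4()), str(uuid.uuid4())
    bricks[0].setdefault("/d/f", gfid_a)  # client A reaches brick 0 first
    bricks[1].setdefault("/d/f", gfid_b)  # client B reaches brick 1 first
    bricks[0].setdefault("/d/f", gfid_b)  # B's write to brick 0 is a no-op
    bricks[1].setdefault("/d/f", gfid_a)  # A's write to brick 1 is a no-op
    return bricks[0]["/d/f"], bricks[1]["/d/f"]  # differ: gfid mismatch/EIO


def heal_with_lock():
    """With the fix, the whole per-file gfid heal runs under one lock."""
    bricks = [{}, {}]
    lock = threading.Lock()

    def client():
        proposed = str(uuid.uuid4())
        with lock:  # serialize: the loser sees the winner's gfid, not its own
            existing = bricks[0].get("/d/f") or bricks[1].get("/d/f")
            gfid = existing or proposed
            for brick in bricks:
                brick.setdefault("/d/f", gfid)

    threads = [threading.Thread(target=client) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bricks[0]["/d/f"], bricks[1]["/d/f"]  # always equal
```

In the unlocked replay the two bricks keep different gfids for the same path, which is exactly the mismatch find trips over; holding one lock across the whole per-file assignment makes the second client observe and reuse the first client's gfid.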

Comment 7 Niels de Vos 2015-05-14 17:27:06 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

