Bug 1487042

Summary: AFR returns the node uuid of the same node for every file in the replica
Product: [Community] GlusterFS
Component: disperse
Version: 3.10
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Sunil Kumar Acharya <sheggodu>
Assignee: Sunil Kumar Acharya <sheggodu>
CC: amukherj, aspandey, bugs, jahernan, ksubrahm, nbalacha, pkarampu, ravishankar, rcyriac, rhinduja, rhs-bugs, storage-qa-internal
Keywords: Triaged
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.10.6
Clone Of: 1366817
Last Closed: 2017-10-06 17:10:52 UTC
Bug Depends On: 1315781, 1366817
Bug Blocks: 1451561, 1451573

Comment 1 Worker Ant 2017-08-31 06:36:53 UTC
REVIEW: https://review.gluster.org/18148 (cluster/ec: return all node uuids from all subvolumes) posted (#2) for review on release-3.10 by Sunil Kumar Acharya (sheggodu)

Comment 2 Worker Ant 2017-09-01 07:44:56 UTC
COMMIT: https://review.gluster.org/18148 committed in release-3.10 by Raghavendra Talur (rtalur) 
------
commit b7d6c070f161fdd9aa0700d11e624b23cefd36cd
Author: Xavier Hernandez <xhernandez>
Date:   Fri May 12 09:23:47 2017 +0200

    cluster/ec: return all node uuids from all subvolumes
    
    EC was returning the UUID of the brick with the smallest value. This
    had the side effect of not evenly balancing the load between bricks
    during rebalance operations.
    
    This patch modifies the common functions that combine multiple subvolume
    values into a single result to take into account the subvolume order
    and, optionally, other subvolumes that could be damaged.
    
    This makes it easier to add future features where brick order is
    important. It also makes it possible to easily identify the originating
    brick of each answer, in case some brick has a special meaning in the
    future.
    
    >Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f
    >BUG: 1366817
    >Signed-off-by: Xavier Hernandez <xhernandez>
    >Reviewed-on: https://review.gluster.org/17297
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: Ashish Pandey <aspandey>
    >Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    
    BUG: 1487042
    Change-Id: Iee0a4da710b41224a6dc8e13fa8dcddb36c73a2f
    Signed-off-by: Sunil Kumar Acharya <sheggodu>
    Reviewed-on: https://review.gluster.org/18148
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
    NetBSD-regression: NetBSD Build System <jenkins.org>
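
The effect the commit describes can be illustrated with a minimal Python sketch. This is not GlusterFS source code: the brick UUIDs, the GFID strings, and the hash-based pick are all illustrative assumptions. It only shows why reporting the single smallest brick UUID for every file sends all rebalance work to one node, while returning the full ordered list lets the caller spread files across nodes.

```python
# Hypothetical sketch, NOT GlusterFS code: contrasts the pre-fix behavior
# (every file reports the smallest brick UUID) with the post-fix behavior
# (all UUIDs are returned and the caller can pick one per file).
import hashlib
import uuid

# Assumption: three bricks in one EC subvolume, each on a different node.
node_uuids = sorted(uuid.uuid4() for _ in range(3))

def old_node_uuid(_gfid):
    # Pre-fix: every file resolves to the same (smallest) UUID,
    # so one node ends up doing all the rebalance work.
    return min(node_uuids)

def new_node_uuid(gfid):
    # Post-fix (simplified): with all UUIDs available in brick order,
    # the caller can deterministically spread files, e.g. by hashing
    # the file's GFID to pick an index.
    index = int(hashlib.sha1(gfid.encode()).hexdigest(), 16) % len(node_uuids)
    return node_uuids[index]

gfids = [f"gfid-{i}" for i in range(300)]
old_owners = {old_node_uuid(g) for g in gfids}   # collapses to one node
new_owners = {new_node_uuid(g) for g in gfids}   # spread across nodes
print(len(old_owners), len(new_owners))
```

The hash-modulo selection here is only a stand-in for whatever policy the caller (e.g. DHT during rebalance) applies; the point of the patch is that the combined answer preserves subvolume order so such a policy becomes possible at all.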

Comment 3 Shyamsundar 2017-10-06 17:10:52 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.10.6, please open a new bug report.

glusterfs-3.10.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-October/000084.html
[2] https://www.gluster.org/pipermail/gluster-users/