Bug 1425703 - [Disperse] Metadata version is not healing when a brick is down
Summary: [Disperse] Metadata version is not healing when a brick is down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Sunil Kumar Acharya
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1434296 1434298
Reported: 2017-02-22 07:32 UTC by Ashish Pandey
Modified: 2018-08-07 10:40 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1434296 1434298
Environment:
Last Closed: 2017-05-30 18:44:46 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ashish Pandey 2017-02-22 07:32:08 UTC
Description of problem:

When brick-1 is down and we create a file on the mount point, index entries are created. When brick-1 comes back up and another brick goes down, heal starts on brick-1 and heals all the data along with the data version. However, the metadata version on brick-1 is not healed and remains 0.




Version-Release number of selected component (if applicable):

[root@apandey glusterfs]# gluster --version
glusterfs 3.11dev
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation

How reproducible:
100%

Steps to Reproduce:
1. Create a (4+2) disperse volume and mount it.
2. Kill one brick, brick-1, and create 10 files on the mount point.
3. Force-start the volume and immediately kill another brick, brick-2.
4. Start index heal and allow enough time for healing to complete.
5. At this point all files on brick-1 should have the same version and size as on the other four UP bricks, and all files should be healed.
6. Check trusted.ec.{version,size}; they should be identical on all five UP bricks.
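The check in step 6 can be done with getfattr on each brick and the hex values compared. The brick paths and sample values below are hypothetical, and the split of the 16-byte trusted.ec.version value into data and metadata halves is illustrative only:

```shell
# On a real cluster you would run, for each brick (paths hypothetical):
#   getfattr -n trusted.ec.version -e hex /bricks/brick1/file1
# trusted.ec.version packs two 64-bit counters: the data version and the
# metadata version. Simulated sample values (illustrative only):
ver_good=0x000000000000000a000000000000000a    # healthy brick: both halves updated
ver_brick1=0x000000000000000a0000000000000000  # data healed, metadata half stuck at 0

if [ "$ver_brick1" = "$ver_good" ]; then
    echo "metadata version healed"
else
    echo "metadata version mismatch (bug reproduced)"
fi
```

With the bug present, the comparison fails because the metadata half of the counter on brick-1 never advances past zero.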

Actual results:
Step 6 shows that the metadata version on brick-1 has not been healed.

Expected results:
Step 6 should show that the metadata version on brick-1 has been healed and matches the other four good UP bricks.

Additional info:

Comment 1 Worker Ant 2017-02-27 10:27:27 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#1) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 2 Worker Ant 2017-03-07 12:57:29 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#2) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 3 Worker Ant 2017-03-15 14:29:05 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#3) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 4 Worker Ant 2017-03-16 10:16:41 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#4) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 5 Worker Ant 2017-03-16 11:34:28 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#5) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 6 Worker Ant 2017-03-17 04:18:27 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#6) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 7 Worker Ant 2017-03-17 06:03:35 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#7) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 8 Worker Ant 2017-03-20 13:45:31 UTC
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#8) for review on master by Sunil Kumar Acharya (sheggodu)

Comment 9 Worker Ant 2017-03-21 07:08:28 UTC
COMMIT: https://review.gluster.org/16772 committed in master by Xavier Hernandez (xhernandez) 
------
commit 0c2253942dd0e6176918a7d530e56053a9f26e6d
Author: Sunil Kumar Acharya <sheggodu>
Date:   Mon Feb 27 15:35:17 2017 +0530

    cluster/ec: Metadata healing fails to update the version
    
    During metadata heal, we were not updating the version
    even though all the inode attributes were in sync.
    
    Updated the code to adjust version when all the inode
    attributes are in sync.
    
    BUG: 1425703
    Change-Id: I6723be3c5f748b286d4efdaf3c71e9d2087c7235
    Signed-off-by: Sunil Kumar Acharya <sheggodu>
    Reviewed-on: https://review.gluster.org/16772
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 10 Shyamsundar 2017-05-30 18:44:46 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

