Description of problem:
When brick-1 is down and we create a file on the mount point, index entries are created. When brick-1 comes back up and another brick goes down, heal starts on brick-1 and heals all the data along with the data version. However, the metadata version on brick-1 is not healed and remains 0.

Version-Release number of selected component (if applicable):
[root@apandey glusterfs]# gluster --version
glusterfs 3.11dev
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

How reproducible:
100%

Steps to Reproduce:
1. Create a (4+2) disperse volume and mount it.
2. Kill one brick (brick-1) and create 10 files on the mount point.
3. Start the volume using force and immediately kill another brick (brick-2).
4. Start index heal and allow enough time for healing to complete.
5. At this point all files on brick-1 should have the same version and size as on the other 4 up bricks, and all files should be healed.
6. Check trusted.ec.{version,size}; it should be the same on all 5 up bricks.

Actual results:
Step 6 shows that the metadata version on brick-1 has not been healed.

Expected results:
Step 6 should show that the metadata version on brick-1 has been healed and matches the other 4 good up bricks.

Additional info:
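The reproduction steps above can be sketched as a CLI session. This is a sketch, not a verified script: the volume name "testvol", the brick paths under /bricks, and the hostname "server1" are assumptions; the brick PIDs must be read from `gluster volume status` output.

```shell
# Step 1: create a (4+2) disperse volume and mount it
# (assumed names: volume "testvol", bricks /bricks/brick{1..6} on "server1").
gluster volume create testvol disperse 6 redundancy 2 \
    server1:/bricks/brick{1..6} force
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Step 2: kill brick-1 (use the PID shown by "volume status"),
# then create 10 files on the mount point.
gluster volume status testvol
kill <pid-of-brick1>    # placeholder: substitute the real PID
for i in $(seq 1 10); do touch /mnt/testvol/file$i; done

# Step 3: restart the dead brick with force, then immediately kill brick-2.
gluster volume start testvol force
kill <pid-of-brick2>    # placeholder: substitute the real PID

# Step 4: trigger index heal and give it enough time to finish.
gluster volume heal testvol

# Step 6: compare trusted.ec.{version,size} across the 5 up bricks;
# the hex values should be identical on all of them.
getfattr -d -m . -e hex /bricks/brick1/file1 | grep trusted.ec
```

With the bug present, the trusted.ec.version metadata component printed for brick-1 stays at 0 while the other bricks show the healed value.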
REVIEW: https://review.gluster.org/16772 (cluster/ec: Metadata healing fails to update the version) posted (#1 through #8) for review on master by Sunil Kumar Acharya (sheggodu)
COMMIT: https://review.gluster.org/16772 committed in master by Xavier Hernandez (xhernandez)
------
commit 0c2253942dd0e6176918a7d530e56053a9f26e6d
Author: Sunil Kumar Acharya <sheggodu>
Date: Mon Feb 27 15:35:17 2017 +0530

    cluster/ec: Metadata healing fails to update the version

    During metadata heal, we were not updating the version even though
    all the inode attributes were in sync. Updated the code to adjust
    the version when all the inode attributes are in sync.

    BUG: 1425703
    Change-Id: I6723be3c5f748b286d4efdaf3c71e9d2087c7235
    Signed-off-by: Sunil Kumar Acharya <sheggodu>
    Reviewed-on: https://review.gluster.org/16772
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    CentOS-regression: Gluster Build System <jenkins.org>
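In essence, the fix makes metadata heal bump the version xattr even when all the inode attributes are already in sync. A minimal toy model of that behavior in plain shell (this is not the actual ec xlator code; the variable names and version values are illustrative):

```shell
#!/bin/sh
# Toy model: each brick carries a data and a metadata component of
# trusted.ec.version. The 4 healthy bricks are at version 10; the
# brick that was down is still at 0 for both components.
good_data=10; good_meta=10
bad_data=0;  bad_meta=0

heal() {
    # Data heal copies the file content and syncs the data version.
    bad_data=$good_data
    # Before the fix, when all inode attributes were already in sync,
    # the metadata version was left untouched (stuck at 0). The fix
    # updates it as well, which this line models.
    bad_meta=$good_meta
}

heal
echo "data=$bad_data meta=$bad_meta"
```

After healing, both components on the previously down brick match the healthy bricks, which is exactly what step 6 of the reproduction checks for.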
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/