Bug 1258313 - Start self-heal and display correct heal info after replace brick
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Pranith Kumar K
Depends On: 1254121 1265077 1278284 1304686 1305755
Blocks:
Reported: 2015-08-31 00:53 EDT by Pranith Kumar K
Modified: 2016-02-09 02:25 EST
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1254121
Environment:
Last Closed: 2015-10-14 06:27:30 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Pranith Kumar K 2015-08-31 00:53:04 EDT
+++ This bug was initially created as a clone of Bug #1254121 +++

Description of problem:

After replacing a brick in a disperse volume, shd (the self-heal daemon) does not start healing the newly added brick (the one added by the "replace-brick" command).
"heal info" does not display any entries to be healed, which is incorrect.

A full heal has to be invoked manually to write the data to the newly added brick.
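
For reference, the manual workaround is to trigger a full heal by hand (the volume name below is a placeholder, not taken from the report):

# gluster volume heal <volname> full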


Version-Release number of selected component (if applicable):

[root@aspandey glusterfs]# gluster --version
glusterfs 3.8dev built on Aug 17 2015 13:13:53
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:

100%

Steps to Reproduce:
1 - Create a 4+2 disperse volume.
2 - FUSE-mount the volume and write some data (directories/files/links) on the mount point.
3 - Replace one of the volume's bricks on the server side.
4 - Execute "gluster v heal <vol name> info" - it displays 0 entries.
5 - Check the newly added brick's location - no data has been written to this brick. (A shell sketch of these steps follows.)
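
For concreteness, a minimal shell sketch of the steps above; the volume name, host names, and brick paths are illustrative placeholders, not taken from the original report:

# gluster volume create testvol disperse 6 redundancy 2 \
      server{1..6}:/bricks/testvol/brick force
# gluster volume start testvol
# mkdir -p /mnt/testvol && mount -t glusterfs server1:/testvol /mnt/testvol
# cp -a /etc /mnt/testvol/data                  # write some dirs/files/links
# gluster volume replace-brick testvol \
      server6:/bricks/testvol/brick server6:/bricks/testvol/brick-new \
      commit force
# gluster volume heal testvol info              # bug: reports 0 entries to heal
# ls /bricks/testvol/brick-new                  # run on server6; bug: the new brick stays empty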

Actual results:

Healing of the data onto the new brick does not start as soon as the brick is replaced.

Expected results:
Healing of the data onto the new brick should start as soon as the brick is replaced.

Additional info:

--- Additional comment from Anand Avati on 2015-08-17 07:22:24 EDT ---

REVIEW: http://review.gluster.org/11938 (cluster/ec : Self heal all the data on newly added brick in case of "replace-brick command") posted (#1) for review on master by Ashish Pandey (aspandey@redhat.com)

--- Additional comment from Anand Avati on 2015-08-30 14:36:31 EDT ---

REVIEW: http://review.gluster.org/11938 (cluster/ec : Mark new entry changelog in entry self-heal) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 1 Anand Avati 2015-08-31 00:54:15 EDT
REVIEW: http://review.gluster.org/12054 (cluster/ec : Mark new entry changelog in entry self-heal) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 2 Ashish Pandey 2015-10-06 01:55:29 EDT
The previous patch has been abandoned.

Following is the link to the new patch:
http://review.gluster.org/12306
Comment 4 Pranith Kumar K 2015-10-14 06:37:24 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
