+++ This bug was initially created as a clone of Bug #1254121 +++

Description of problem:
After replacing a brick in a disperse volume, shd does not start healing the newly added brick (added with the "replace-brick" command). "heal info" does not display any entries to be healed, which is incorrect. A full heal must be invoked manually to write data onto the newly added brick.

Version-Release number of selected component (if applicable):
[root@aspandey glusterfs]# gluster --version
glusterfs 3.8dev built on Aug 17 2015 13:13:53
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

How reproducible:
100%

Steps to Reproduce:
1. Create a 4+2 disperse volume.
2. Fuse-mount the volume and write some data (dirs/files/links) on the mount point.
3. Replace a brick of this volume on the server side.
4. Execute "gluster v heal <vol name> info" - it displays 0 entries.
5. Check the newly added brick's location - no data has been written to it.

Actual results:
Healing of data onto the new brick does not start as soon as a brick is replaced.

Expected results:
Healing of data onto the new brick should start as soon as a brick is replaced.

Additional info:

--- Additional comment from Anand Avati on 2015-08-17 07:22:24 EDT ---

REVIEW: http://review.gluster.org/11938 (cluster/ec : Self heal all the data on newly added brick in case of "replace-brick command") posted (#1) for review on master by Ashish Pandey (aspandey)

--- Additional comment from Anand Avati on 2015-08-30 14:36:31 EDT ---

REVIEW: http://review.gluster.org/11938 (cluster/ec : Mark new entry changelog in entry self-heal) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
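The steps above can be sketched as a shell session against a test cluster. This is a minimal illustration, not the exact commands from the report: the volume name, hostname, and brick paths are placeholders, and a single-host six-brick layout (hence "force") is assumed.

```shell
#!/bin/sh
# Assumed layout: six bricks on one test host; adjust host/paths as needed.
VOL=ec-test
HOST=server1

# 1. Create a 4+2 disperse volume (6 bricks, 2 redundancy) and start it.
gluster volume create $VOL disperse 6 redundancy 2 \
    $HOST:/bricks/b1 $HOST:/bricks/b2 $HOST:/bricks/b3 \
    $HOST:/bricks/b4 $HOST:/bricks/b5 $HOST:/bricks/b6 force
gluster volume start $VOL

# 2. Fuse-mount the volume and write some data (dir/file/link).
mount -t glusterfs $HOST:/$VOL /mnt/$VOL
mkdir /mnt/$VOL/dir
echo data > /mnt/$VOL/dir/file
ln -s file /mnt/$VOL/dir/link

# 3. Replace one brick on the server side.
gluster volume replace-brick $VOL \
    $HOST:/bricks/b6 $HOST:/bricks/b6-new commit force

# 4. With the bug present, heal info reports 0 entries, and
# 5. the new brick directory stays empty until a full heal is triggered:
gluster volume heal $VOL info
gluster volume heal $VOL full
```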
REVIEW: http://review.gluster.org/12054 (cluster/ec : Mark new entry changelog in entry self-heal) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
The previous patch has been abandoned. The new patch is at http://review.gluster.org/12306
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.5, please open a new bug report. glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user