REVIEW: http://review.gluster.org/13756 (TIER: stopping the tierd when the volume goes down) posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/13756 (TIER: stopping the tierd when the volume goes down) posted (#2) for review on release-3.7 by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/13756 (TIER: stopping the tierd when the volume goes down) posted (#3) for review on release-3.7 by hari gowtham (hari.gowtham005)
COMMIT: http://review.gluster.org/13756 committed in release-3.7 by Dan Lambright (dlambrig)
------
commit e2bd0563a352e1d22a24f6a8a99beb4d4b8eb2ac
Author: hari gowtham <hgowtham>
Date: Tue Mar 8 16:38:34 2016 +0530

    TIER: stopping the tierd when the volume goes down

    Back-port of: http://review.gluster.org/#/c/13646/

    If there are a large number of files to be migrated and the volume goes
    down in the meantime, the tierd has to be stopped. With a huge query
    file list, however, it keeps checking each file before stopping. If the
    volume comes back up before the old tierd dies, a new tierd is not
    created because the old one is still present; once the old one finishes
    its task it dies, and the status ends up as failed.

    This patch checks whether the status is still running and, if so, lets
    the tierd continue its work; otherwise it stops the tierd.

    >Change-Id: I6522a4e2919e84bf502b99b13873795b9274f3cd
    >BUG: 1315659
    >Signed-off-by: hari gowtham <hgowtham>
    >Reviewed-on: http://review.gluster.org/13646
    >Tested-by: Dan Lambright <dlambrig>
    >Smoke: Gluster Build System <jenkins.com>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Dan Lambright <dlambrig>

    Change-Id: I8326dbe5edaaea921e5401f39d148aac322c78d0
    BUG: 1318498
    Signed-off-by: hari <hgowtham>
    Reviewed-on: http://review.gluster.org/13756
    Smoke: Gluster Build System <jenkins.com>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
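The idea behind the fix can be sketched as follows. This is an illustrative model, not the actual GlusterFS C code: `get_status` and `migrate_one` are hypothetical stand-ins for the daemon's status check and per-file migration. The key point is that the status is re-checked before each file, so a stopped volume aborts the scan early instead of the daemon walking the entire query-file list before it dies.

```python
STATUS_RUNNING = "running"


def migrate_files(query_list, get_status, migrate_one):
    """Migrate files only while the tier status is still 'running'.

    Re-checking the status on every iteration means a volume that goes
    down mid-scan stops the loop promptly, rather than after the whole
    (potentially huge) query-file list has been examined.
    """
    migrated = 0
    for path in query_list:
        if get_status() != STATUS_RUNNING:
            break  # volume went down: stop instead of scanning the rest
        migrate_one(path)
        migrated += 1
    return migrated


# Example: the status flips to 'stopped' after two files, so only the
# first two entries are migrated and the loop exits early.
statuses = ["running", "running", "stopped"]
done = []
count = migrate_files(
    ["a", "b", "c", "d", "e"],
    lambda: statuses.pop(0) if statuses else "stopped",
    done.append,
)
```

With this shape, the old daemon exits as soon as it observes a non-running status, so a freshly started tierd is not blocked by a stale one and the status does not end up as failed.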
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.10, please open a new bug report. glusterfs-3.7.10 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://www.gluster.org/pipermail/gluster-users/2016-April/026164.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user