Bug 1259078 - should not spawn another migration daemon on graph switch
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Dan Lambright
bugs@gluster.org
Depends On: 1226005
Blocks: 1227469 1260923
Reported: 2015-09-01 19:03 EDT by Dan Lambright
Modified: 2015-10-30 13:32 EDT (History)
2 users (show)

See Also:
Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1226005
Environment:
Last Closed: 2015-10-14 06:30:01 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Dan Lambright 2015-09-01 19:03:37 EDT
+++ This bug was initially created as a clone of Bug #1226005 +++

When we did a graph switch on a rebalance daemon, a second call to gf_defrag_start() was made. This led to multiple threads performing migration, and when multiple threads try to move the same file there can be deadlocks.
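The fix pattern described here amounts to remembering that the daemon has already been started so that a graph switch cannot spawn it a second time. A minimal sketch of that guard follows; this is not GlusterFS code, and the names (`migration_started`, `start_migration_daemon_once`) are hypothetical stand-ins for the real start-state bookkeeping:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical start-state guard: once the migration daemon has been
 * started, later callers (e.g. a graph-switch notification) are no-ops. */
static bool migration_started = false;
static pthread_mutex_t start_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns true if this call actually started the daemon,
 * false if one was already running. */
bool start_migration_daemon_once(void)
{
    bool started_now = false;

    pthread_mutex_lock(&start_lock);
    if (!migration_started) {
        migration_started = true;
        started_now = true;
        /* ... spawn the single migration thread here ... */
    }
    pthread_mutex_unlock(&start_lock);

    return started_now;
}
```

With such a guard in place, the second invocation triggered by the graph switch returns without creating another migration thread, so only one thread ever moves files.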

--- Additional comment from Anand Avati on 2015-05-28 14:01:39 EDT ---

REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This lead to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 17:29:10 EDT ---

REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This lead to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#3) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-05-29 15:56:51 EDT ---

REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This lead to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#4) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-05-30 06:34:53 EDT ---

REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This lead to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#5) for review on master by Niels de Vos (ndevos@redhat.com)

--- Additional comment from Anand Avati on 2015-05-30 11:59:50 EDT ---

REVIEW: http://review.gluster.org/10977 (cluster/dht: maintain start state of rebalance daemon across graph switch.) posted (#6) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-06-01 14:12:41 EDT ---

COMMIT: http://review.gluster.org/10977 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 3f11b8e8ec6d78ebe33636b64130d5d133729f2c
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Thu May 28 14:00:37 2015 -0400

    cluster/dht: maintain start state of rebalance daemon across graph switch.
    
    When we did a graph switch on a rebalance daemon, a second call
    to gf_degrag_start() was done. This lead to multiple threads
    doing migration. When multiple threads try to move the same
    file there can be deadlocks.
    
    Change-Id: I931ca7fe600022f245e3dccaabb1ad004f732c56
    BUG: 1226005
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/10977
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>

--- Additional comment from Anand Avati on 2015-06-26 19:22:05 EDT ---

REVIEW: http://review.gluster.org/11372 (cluster/tier: stop tier migration after graph switch) posted (#10) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-06-26 19:36:02 EDT ---

REVIEW: http://review.gluster.org/11372 (cluster/tier: stop tier migration after graph switch) posted (#11) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-06-26 20:11:22 EDT ---

COMMIT: http://review.gluster.org/11372 committed in master by Dan Lambright (dlambrig@redhat.com) 
------
commit 875aa01ec80e56d85d0bc6028c6f1417f6ab140f
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Tue Jun 23 16:35:03 2015 -0400

    cluster/tier: stop tier migration after graph switch
    
    On a graph switch, a new xlator and private structures are
    created. The tier migration daemon must stop using the
    old xlator and private structures and begin using the
    new ones. Otherwise, when RPCs arrive (such as counter
    queries from glusterd), the new xlator will be consulted
    but it will not have up to date information. The fix
    detects a graph switch and exits the daemon in this
    case. Typical graph switches for the tier case would
    be turning off performance translators.
    
    Change-Id: Ibfbd4720dc82ea179b77c81b8f534abced21e3c8
    BUG: 1226005
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/11372
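The tier-side fix described in this commit is a staleness check: the daemon remembers which graph it was started on and exits once a graph switch has installed a new one. A simplified sketch, with stand-in types rather than real GlusterFS xlator/graph structures:

```c
#include <stddef.h>

/* Simplified stand-in for a translator graph object; not a GlusterFS type. */
typedef struct graph {
    int id;
} graph_t;

/* The daemon records the graph it was started on. On each loop iteration
 * it checks whether that graph is still the active one: a graph switch
 * installs a new graph object, so pointer inequality means the daemon's
 * private structures belong to the old, stale graph and it should exit. */
int daemon_should_exit(const graph_t *started_on, const graph_t *active)
{
    return started_on != active;
}
```

Exiting on a stale graph ensures that later RPCs (such as counter queries from glusterd) are served through the new xlator and its up-to-date private state rather than the old one.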
Comment 1 nchilaka 2015-09-11 08:08:07 EDT
QA Validation:
=============
I changed performance volume options to trigger a graph switch and did not observe any restart of the tier daemon. Hence moving this to verified as fixed. I also ran this test with I/O in progress.
[root@localhost tier]# rpm -qa|grep gluster
vdsm-gluster-4.16.20-1.2.el7rhgs.noarch
glusterfs-client-xlators-3.7.1-12.el7rhgs.x86_64
glusterfs-server-3.7.1-12.el7rhgs.x86_64
gluster-nagios-common-0.2.0-2.el7rhgs.noarch
glusterfs-3.7.1-12.el7rhgs.x86_64
glusterfs-fuse-3.7.1-12.el7rhgs.x86_64
glusterfs-cli-3.7.1-12.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-12.el7rhgs.x86_64
gluster-nagios-addons-0.2.4-4.el7rhgs.x86_64
glusterfs-libs-3.7.1-12.el7rhgs.x86_64
glusterfs-api-3.7.1-12.el7rhgs.x86_64
glusterfs-rdma-3.7.1-12.el7rhgs.x86_64
[root@localhost tier]#

Refer to bz#1253549 for logs and other details.
Comment 2 Pranith Kumar K 2015-10-14 06:30:01 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
Comment 3 Pranith Kumar K 2015-10-14 06:38:41 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
