When a graph switch happened on a rebalance daemon, gf_degrag_start() was called a second time. This led to multiple threads doing migration, and when multiple threads try to move the same file there can be deadlocks.
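To illustrate the deadlock described above, here is a minimal, self-contained sketch (illustrative only, not GlusterFS code; the lock names and thread functions are hypothetical). Two migration threads work on the same file and take the same pair of locks in opposite order, so each ends up waiting forever for the lock the other holds:

/* illustrative only -- not GlusterFS code */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t src_lock = PTHREAD_MUTEX_INITIALIZER; /* e.g. source subvol */
static pthread_mutex_t dst_lock = PTHREAD_MUTEX_INITIALIZER; /* e.g. target subvol */

static void *
migrator_a (void *arg)
{
        (void) arg;
        pthread_mutex_lock (&src_lock);
        sleep (1);                       /* give B time to take dst_lock     */
        pthread_mutex_lock (&dst_lock);  /* blocks forever: B holds dst_lock */
        /* ... would migrate the file here ... */
        pthread_mutex_unlock (&dst_lock);
        pthread_mutex_unlock (&src_lock);
        return NULL;
}

static void *
migrator_b (void *arg)
{
        (void) arg;
        pthread_mutex_lock (&dst_lock);
        sleep (1);                       /* give A time to take src_lock     */
        pthread_mutex_lock (&src_lock);  /* blocks forever: A holds src_lock */
        /* ... would migrate the file here ... */
        pthread_mutex_unlock (&src_lock);
        pthread_mutex_unlock (&dst_lock);
        return NULL;
}

int
main (void)
{
        pthread_t a, b;

        pthread_create (&a, NULL, migrator_a, NULL);
        pthread_create (&b, NULL, migrator_b, NULL);
        pthread_join (a, NULL);          /* never returns: the threads deadlock */
        pthread_join (b, NULL);
        return 0;
}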
REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This led to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#1) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This led to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#3) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This led to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#4) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10977 (When we did a graph switch on a rebalance daemon, a second call to gf_degrag_start() was done. This led to multiple threads doing migration. When multiple threads try to move the same file there can be deadlocks.) posted (#5) for review on master by Niels de Vos (ndevos)
REVIEW: http://review.gluster.org/10977 (cluster/dht: maintain start state of rebalance daemon across graph switch.) posted (#6) for review on master by Dan Lambright (dlambrig)
COMMIT: http://review.gluster.org/10977 committed in master by Vijay Bellur (vbellur)
------
commit 3f11b8e8ec6d78ebe33636b64130d5d133729f2c
Author: Dan Lambright <dlambrig>
Date:   Thu May 28 14:00:37 2015 -0400

    cluster/dht: maintain start state of rebalance daemon across graph switch.

    When we did a graph switch on a rebalance daemon, a second call to
    gf_degrag_start() was done. This led to multiple threads doing migration.
    When multiple threads try to move the same file there can be deadlocks.

    Change-Id: I931ca7fe600022f245e3dccaabb1ad004f732c56
    BUG: 1226005
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/10977
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
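The fix amounts to remembering that the migration thread has already been started, so a repeated start request after a graph switch becomes a no-op instead of spawning a second thread. A minimal sketch of that idea, with hypothetical names (rebal_state_t, rebal_start_once, migration_worker), not the actual GlusterFS code:

#include <pthread.h>

typedef struct {
        pthread_mutex_t lock;
        int             started;   /* 1 once the migration thread exists */
        pthread_t       tid;
} rebal_state_t;

/* hypothetical stand-in for the thread that actually moves files */
static void *
migration_worker (void *arg)
{
        (void) arg;
        /* ... scan and migrate files ... */
        return NULL;
}

int
rebal_start_once (rebal_state_t *state, void *arg)
{
        int ret = 0;

        pthread_mutex_lock (&state->lock);
        if (state->started) {
                /* second start request, e.g. after a graph switch: the
                 * daemon is already migrating, do not spawn another
                 * thread that could race with the first one */
                pthread_mutex_unlock (&state->lock);
                return 0;
        }
        ret = pthread_create (&state->tid, NULL, migration_worker, arg);
        if (ret == 0)
                state->started = 1;
        pthread_mutex_unlock (&state->lock);

        return ret;
}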
REVIEW: http://review.gluster.org/11372 (cluster/tier: stop tier migration after graph switch) posted (#10) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/11372 (cluster/tier: stop tier migration after graph switch) posted (#11) for review on master by Dan Lambright (dlambrig)
COMMIT: http://review.gluster.org/11372 committed in master by Dan Lambright (dlambrig)
------
commit 875aa01ec80e56d85d0bc6028c6f1417f6ab140f
Author: Dan Lambright <dlambrig>
Date:   Tue Jun 23 16:35:03 2015 -0400

    cluster/tier: stop tier migration after graph switch

    On a graph switch, a new xlator and private structures are created. The
    tier migration daemon must stop using the old xlator and private
    structures and begin using the new ones. Otherwise, when RPCs arrive
    (such as counter queries from glusterd), the new xlator will be
    consulted but it will not have up-to-date information. The fix detects
    a graph switch and exits the daemon in this case. Typical graph
    switches for the tier case would be turning off performance translators.

    Change-Id: Ibfbd4720dc82ea179b77c81b8f534abced21e3c8
    BUG: 1226005
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/11372
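A minimal sketch of the shape of this fix, using hypothetical names (current_graph_id(), worker_ctx_t, migrate_one_file()); it is not the actual tier translator code, only an illustration of the check: the migration loop records the graph generation it started under and stops as soon as a newer graph becomes active, so only the daemon attached to the new xlator and private structures keeps running.

#include <stdint.h>
#include <stdio.h>

/* hypothetical: returns an identifier that changes on every graph switch */
extern uint64_t current_graph_id (void);

typedef struct {
        uint64_t start_graph_id;   /* graph generation at thread start */
} worker_ctx_t;

/* hypothetical: migrate one file, return 0 when there is nothing left */
extern int migrate_one_file (worker_ctx_t *ctx);

void *
tier_migration_loop (void *arg)
{
        worker_ctx_t *ctx = arg;

        for (;;) {
                if (current_graph_id () != ctx->start_graph_id) {
                        /* a graph switch happened (e.g. a performance
                         * translator was turned off); stop migrating so
                         * stale xlator/private state is no longer used */
                        fprintf (stderr, "graph switch detected, exiting migration\n");
                        break;
                }
                if (!migrate_one_file (ctx))
                        break;     /* nothing left to migrate */
        }
        return NULL;
}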
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in that GlusterFS release and has been closed. Hence this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user