REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#1) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#2) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#3) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#4) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#5) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#6) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#7) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9372 (cluster/afr: serialize inode locks) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9372 (cluster/afr: serialize inode locks) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9372 (cluster/afr: serialize inode locks) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9372 (cluster/afr: serialize inode locks) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9372 (cluster/afr: serialize inode locks) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/9372 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit f30af2735cab7475d86665856b433ca409e79ee7
Author: Pranith Kumar K <pkarampu>
Date: Wed Dec 31 15:15:53 2014 +0530

cluster/afr: serialize inode locks

Problem:
Afr winds inodelk calls without any order, so blocking inodelks from two different mounts can lead to a deadlock when mount1 gets the lock on brick-1 and blocks on brick-2, whereas mount2 gets the lock on brick-2 and blocks on brick-1.

Fix:
Serialize the inodelks, whether they are blocking or non-blocking. Non-blocking locks also need to be serialized; otherwise there is a chance that both mounts issuing the same non-blocking inodelk may end up not acquiring the lock on any brick.

Ex: Mount1 and Mount2 request a full-length lock on file f1. Mount1's afr may acquire the partial lock on brick-1 but fail to acquire the lock on brick-2 because Mount2 already got the lock on brick-2, and vice versa. Since both mounts only got partial locks, afr treats this as a failure to gain the locks and unwinds with errno EAGAIN.

Change-Id: Ie6cc3d564638ab3aad586f9a4064d81e42d52aef
BUG: 1176008
Signed-off-by: Pranith Kumar K <pkarampu>
Reviewed-on: http://review.gluster.org/9372
Reviewed-by: Krutika Dhananjay <kdhananj>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#8) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/9302 (cluster/dht: synchronize with other concurrent healers while healing layout.) posted (#9) for review on master by Raghavendra G (rgowdapp)
COMMIT: http://review.gluster.org/9302 committed in master by Raghavendra G (rgowdapp)
------
commit 571a71f0acd0ec59340b9d0d2519793e33a1dc16
Author: Raghavendra G <rgowdapp>
Date: Wed Feb 18 12:15:55 2015 +0530

cluster/dht: synchronize with other concurrent healers while healing layout.

The current layout-heal code assumes that layout setting is idempotent, which allowed multiple concurrent healers to set the layout without any synchronization. However, this is not the case: different healers can come up with different layouts for the same directory, making layout setting non-idempotent. So, we bring in synchronization among healers to:
1. Not overwrite a well-formed on-disk layout.
2. Refresh the in-memory layout from the on-disk layout if the in-memory layout needs healing and the on-disk layout is well formed.

This patch synchronizes:
1. among multiple healers,
2. among multiple fix-layouts (which extend the layout to account for added or removed bricks),
3. but not between healers and fix-layouts.

So the problem of stale in-memory layouts (not matching the layout on disk) is not _completely_ fixed by this patch.

Signed-off-by: Raghavendra G <rgowdapp>
Change-Id: Ia285f25e8d043bb3175c61468d0d11090acee539
BUG: 1176008
Reviewed-on: http://review.gluster.org/9302
Reviewed-by: N Balachandran <nbalacha>
REVIEW: http://review.gluster.org/9733 (cluster/dht: create request dictionary if necessary during refresh layout.) posted (#1) for review on master by Raghavendra G (rgowdapp)
The patch posted in comment#17 was changed to use BZ#1195668. Moving this BZ to modified based on comment#16.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user