REVIEW: http://review.gluster.org/5302 (dht: fix dht_discover_cbk doing a wrong layout set.) posted (#2) for review on master by Shishir Gowda (sgowda)
REVIEW: http://review.gluster.org/5302 (dht: fix dht_discover_cbk doing a wrong layout set.) posted (#3) for review on master by Shishir Gowda (sgowda)
COMMIT: http://review.gluster.org/5302 committed in master by Anand Avati (avati)
------
commit ad5ab1216066495589d73015f47183cc26f10eb6
Author: shishir gowda <sgowda>
Date: Tue Jul 9 09:09:30 2013 +0530

    dht: fix dht_discover_cbk doing a wrong layout set.

    With the sequence of operations below, the current code runs into
    trouble (MP == mountpoint):

    T0,MP1# mkdir /abcd (succeeds on the hash_subvol)
    T1,MP2# mkdir /abcd (gets EEXIST as the dir exists on the hash_subvol)
    T2,MP2# mkdir /.gfid/<abcd's gfid>/xyz (lookup happens on abcd's gfid,
            calls dht_discover)
    T3,MP1# (completes mkdir(), goes to dir_selfheal to set the layouts)
    T4,MP2# (dht_discover_cbk gets success for the lookup as the entry
            existed; since the layout is not yet written, it reports
            "normalize done, found holes")
    T5,MP2# (as a layout anomaly is not considered an issue in this patch,
            dht_layout_set happens on the inode, with all xlators pointing
            to 0s)
    T6,MP1# (completes the mkdir call; the inode now has proper layouts)
    T7,MP2# mkdir /.gfid/<abcd's gfid>/xyz fails with ENOENT (with a log
            saying no subvol was found for the hash value of xyz)

    Porting Amar's fix from the downstream beta branch.

    Change-Id: Ibdc37ee614c96158a1330af19cad81a39bee2651
    BUG: 982913
    Original-author: Amar Tumballi <amarts>
    Signed-off-by: shishir gowda <sgowda>
    Reviewed-on: http://review.gluster.org/5302
    Reviewed-by: Amar Tumballi <amarts>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>
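To make the T4/T5 window concrete, here is a minimal, self-contained C sketch of the kind of guard this fix implies: refuse to cache a layout on the inode while every hash range is still zeroed. All names here (toy_layout, layout_is_unpopulated, discover_cbk_set_layout) and the all-zero check are illustrative assumptions for this comment, not the glusterfs-internal API or the actual patch.

/* Sketch only: models the guard dht_discover_cbk needs before committing
 * a layout to the inode.  Names are hypothetical, not glusterfs API. */
#include <stdio.h>
#include <stdbool.h>

#define SUBVOL_CNT 4

struct toy_layout {
    /* hash range covered on each subvolume; 0-0 means "not written yet" */
    unsigned int start[SUBVOL_CNT];
    unsigned int stop[SUBVOL_CNT];
};

/* True when every entry is still zeroed, i.e. dir_selfheal on the other
 * mount has not finished writing the on-disk layout (the T4/T5 window). */
static bool layout_is_unpopulated(const struct toy_layout *l)
{
    for (int i = 0; i < SUBVOL_CNT; i++)
        if (l->start[i] != 0 || l->stop[i] != 0)
            return false;
    return true;
}

/* Discover callback: only cache the layout on the inode once it actually
 * carries ranges; otherwise later hashing (e.g. of "xyz") finds no subvol. */
static int discover_cbk_set_layout(const struct toy_layout *found)
{
    if (layout_is_unpopulated(found)) {
        fprintf(stderr, "layout is all zeros; not setting it on the inode\n");
        return -1; /* caller should retry the lookup / self-heal instead */
    }
    printf("layout committed to inode\n");
    return 0;
}

int main(void)
{
    struct toy_layout empty  = {{0}, {0}};         /* what MP2 saw at T4   */
    struct toy_layout healed = {{0, 1, 2, 3},      /* after MP1's selfheal */
                                {1, 2, 3, 4}};
    discover_cbk_set_layout(&empty);   /* rejected: avoids the T7 ENOENT */
    discover_cbk_set_layout(&healed);  /* accepted */
    return 0;
}

Rejecting the zeroed layout at T5 means MP2 falls back to lookup/self-heal instead of caching a layout under which no subvolume can own the hash of xyz.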
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user