REVIEW: http://review.gluster.org/11078 (cluster/ec: EC_XATTR_DIRTY doesn't come in response) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/11078 (cluster/ec: Don't handle EC_XATTR_DIRTY in response) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/11078 (cluster/ec: Don't handle EC_XATTR_DIRTY in response) posted (#3) for review on master by Vijay Bellur (vbellur)
COMMIT: http://review.gluster.org/11078 committed in master by Vijay Bellur (vbellur)
------
commit 3373379303afa575c0616482c8ab8c3c4a08cc22
Author: Pranith Kumar K <pkarampu>
Date: Thu Jun 4 09:52:51 2015 +0530

    cluster/ec: Don't handle EC_XATTR_DIRTY in response

    Problem:
    ec_update_size_version expects all the keys it performed the xattrop with
    to come back in the response so that it can set the values again in
    ec_update_size_version_done. But EC_XATTR_DIRTY is not combined, so its
    value is not present in the response. As a result, ctx->post/pre_dirty are
    not updated in ec_update_size_version_done and remain non-zero. When
    ec_unlock_now is called as part of flush's unlock phase, it again tries to
    perform the same xattrop for EC_XATTR_DIRTY. But ec_update_size_version is
    not expected to be called in the unlock phase of flush, because
    ec_flush_size_version should have reset everything to zero, and unlock is
    never invoked from ec_update_size_version_done for flush/fsync/fsyncdir.
    This leads to a stale lock, which leads to a hang.

    Fix:
    EC_XATTR_DIRTY is removed in ec_xattrop_cbk and is never combined with
    other answers, so remove the handling of it in the response.

    Change-Id: If0ea3efec3235a6e312465d8838585fbe752c7ea
    BUG: 1227654
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11078
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
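To make the failure mode easier to follow, here is a minimal C sketch of the idea, not the actual GlusterFS code: all names below (simple_dict, ec_ctx_sketch, xattrop_cbk_sketch, update_done_sketch) are invented for illustration. It only models the point of the commit message: the xattrop callback strips EC_XATTR_DIRTY before answers are combined, so the completion routine must not expect that key back in the response, and the dirty counters have to be cleared independently of it.

    /* Hypothetical sketch, not GlusterFS code. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define EC_XATTR_DIRTY   "trusted.ec.dirty"
    #define EC_XATTR_VERSION "trusted.ec.version"
    #define EC_XATTR_SIZE    "trusted.ec.size"

    /* toy stand-in for a dict_t: fixed set of keys, -1 means "not present" */
    struct simple_dict {
        const char *keys[3];
        int64_t     values[3];
    };

    static int64_t dict_lookup(const struct simple_dict *d, const char *key)
    {
        for (int i = 0; i < 3; i++)
            if (d->keys[i] && strcmp(d->keys[i], key) == 0)
                return d->values[i];
        return -1; /* key absent */
    }

    struct ec_ctx_sketch {
        int64_t pre_dirty;
        int64_t post_dirty;
        int64_t post_version;
        int64_t post_size;
    };

    /* callback side: EC_XATTR_DIRTY is dropped before answers are combined */
    static void xattrop_cbk_sketch(struct simple_dict *response)
    {
        for (int i = 0; i < 3; i++)
            if (response->keys[i] &&
                strcmp(response->keys[i], EC_XATTR_DIRTY) == 0)
                response->keys[i] = NULL; /* never reaches the combined answer */
    }

    /* completion side: consume only keys that were actually combined */
    static void update_done_sketch(struct ec_ctx_sketch *ctx,
                                   const struct simple_dict *response)
    {
        int64_t v;

        if ((v = dict_lookup(response, EC_XATTR_VERSION)) >= 0)
            ctx->post_version = v;
        if ((v = dict_lookup(response, EC_XATTR_SIZE)) >= 0)
            ctx->post_size = v;

        /* The bug: expecting EC_XATTR_DIRTY here leaves pre/post_dirty
         * non-zero forever, so the flush unlock path issues another xattrop
         * and the lock goes stale. In this sketch the dirty deltas are
         * cleared without looking for that key in the response. */
        ctx->pre_dirty = 0;
        ctx->post_dirty = 0;
    }

    int main(void)
    {
        struct simple_dict resp = {
            { EC_XATTR_DIRTY, EC_XATTR_VERSION, EC_XATTR_SIZE },
            { 1, 42, 4096 }
        };
        struct ec_ctx_sketch ctx = { .pre_dirty = 1, .post_dirty = 1 };

        xattrop_cbk_sketch(&resp);
        update_done_sketch(&ctx, &resp);

        printf("dirty=%lld/%lld version=%lld size=%lld\n",
               (long long)ctx.pre_dirty, (long long)ctx.post_dirty,
               (long long)ctx.post_version, (long long)ctx.post_size);
        return 0;
    }

With the dirty counters reset regardless of the response contents, the sketch ends with dirty=0/0, which is the state the real flush/unlock path needs in order not to repeat the xattrop and hang on a stale lock.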
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user