+++ This bug was initially created as a clone of Bug #1284365 +++

Description of problem:

After an extending write is complete, at the level of shard translator, the postbuf is updated not once but twice with the same delta size and block count: once in shard_update_file_size_cbk(), and once in shard_post_update_size_writev_handler(). This can lead to unexpected behavior if md-cache is part of the client stack and caches the values returned by shard translator in postbuf.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Vijay Bellur on 2015-11-23 02:42:36 EST ---

REVIEW: http://review.gluster.org/12717 (features/shard: Eliminate extra update to postbuf in writev) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2015-11-23 07:38:35 EST ---

REVIEW: http://review.gluster.org/12717 (features/shard: Eliminate extra update to postbuf in writev) posted (#2) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2015-11-23 13:58:10 EST ---

REVIEW: http://review.gluster.org/12717 (features/shard: Eliminate extra update to postbuf in writev) posted (#3) for review on master by Vijay Bellur (vbellur)

--- Additional comment from Vijay Bellur on 2015-11-24 01:20:06 EST ---

COMMIT: http://review.gluster.org/12717 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit c93e436527e9d2ceed46b939e18edc40b7018cee
Author: Krutika Dhananjay <kdhananj>
Date: Mon Nov 23 13:06:25 2015 +0530

features/shard: Eliminate extra update to postbuf in writev

After an extending write is complete, shard translator updates postbuf at two places:
1. shard_update_file_size_cbk(), and
2. shard_post_update_size_writev_handler().
This can lead to unexpected behavior if md-cache is part of the client stack and caches and serves the values returned by shard translator in postbuf.

This patch eliminates the update to postbuf in shard_post_update_size_writev_handler().

Change-Id: I9d107bf57baad66886eebec14aa369b6a3c88c49
BUG: 1284365
Signed-off-by: Krutika Dhananjay <kdhananj>
Reviewed-on: http://review.gluster.org/12717
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
REVIEW: http://review.gluster.org/12737 (features/shard: Eliminate extra update to postbuf in writev) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/12737 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)
------
commit 5c751eba5f392bbcea5b329867112513faaf8366
Author: Krutika Dhananjay <kdhananj>
Date: Mon Nov 23 13:06:25 2015 +0530

features/shard: Eliminate extra update to postbuf in writev

Backport of: http://review.gluster.org/#/c/12717/

After an extending write is complete, shard translator updates postbuf at two places:
1. shard_update_file_size_cbk(), and
2. shard_post_update_size_writev_handler().

This can lead to unexpected behavior if md-cache is part of the client stack and caches and serves the values returned by shard translator in postbuf.

This patch eliminates the update to postbuf in shard_post_update_size_writev_handler().

Change-Id: I1b97a46931b12d5a2f5d60877e57e0caf9e9fcb6
BUG: 1285139
Signed-off-by: Krutika Dhananjay <kdhananj>
Reviewed-on: http://review.gluster.org/12737
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user