Description of problem:
tcmu-runner no longer opens the block device with O_SYNC, so writes can be cached by the write-behind translator. If that happens, on failover some data could still be stuck in the cache and be lost. Therefore performance.strict-o-direct should be enabled for gluster-block volumes.
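As an illustration of the fix described above, the option can be set per volume with the gluster CLI. This is a hedged sketch: the volume name "blockvol" is a placeholder, and whether your build ships a "gluster-block" group profile depends on the glusterfs version installed.

```shell
# Enable strict O_DIRECT handling so write-behind honors O_DIRECT opens
# and does not cache block writes ("blockvol" is a placeholder volume name).
gluster volume set blockvol performance.strict-o-direct on

# Alternatively, if the installed glusterfs ships a gluster-block group
# profile, apply the whole recommended option set in one step:
gluster volume set blockvol group gluster-block
```

The group form applies every option listed in the profile file under /var/lib/glusterd/groups/, which is how the fix referenced in this bug delivers the setting.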
REVIEW: https://review.gluster.org/18120 (gluster-block: strict-o-direct should be on) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: https://review.gluster.org/18120 (gluster-block: strict-o-direct should be on) posted (#2) for review on master by Shyamsundar Ranganathan (srangana)
COMMIT: https://review.gluster.org/18120 committed in master by Shyamsundar Ranganathan (srangana)
------
commit 3b5f4de6926780b34570731ad34992a4735dd410
Author: Pranith Kumar K <pkarampu>
Date: Mon Aug 28 19:33:30 2017 +0530

    gluster-block: strict-o-direct should be on

    tcmu-runner is not going to open block with O_SYNC anymore, so writes
    have a chance of getting cached in write-behind. When that happens,
    there is a chance that on failover some data could be stuck in cache
    and be lost.

    BUG: 1485962
    Change-Id: If9835d914821dfc4ff432dc96775677a55d2918f
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: https://review.gluster.org/18120
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
    Smoke: Gluster Build System <jenkins.org>
    Tested-by: Shyamsundar Ranganathan <srangana>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/