Description of problem:

When the optimistic changelog is enabled, we do not set the dirty xattr for non-data fops in the preop. While releasing the lock, if everything went well and all the bricks are up, we still send a version update to all the bricks of that subvolume. This final xattrop to update the version is not actually required in this case: it only increases the version and serves no purpose, since dirty was never set.

Not updating the final version in this case has two advantages:

1 - We avoid sending the final xattrop, which improves performance.
2 - Suppose the final xattrop sent during unlock fails on 3 bricks (config 4+2) and succeeds on the other 3 because of a brick failure or connection fluctuation. The file then becomes inaccessible even though its data is fine. If we do not send the final xattrop in this case, we avoid one scenario where this can happen.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
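To make the proposed check concrete, below is a minimal standalone sketch of the decision described above. It is not the actual ec translator code; the names lock_state and needs_final_version_update are hypothetical and only model the conditions under which the final xattrop could be skipped.

    /* Standalone sketch, not glusterfs code; names are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical summary of a lock's state at unlock time. */
    struct lock_state {
        bool optimistic_changelog; /* optimistic change-log enabled */
        bool dirty_set;            /* dirty xattr was set in the preop */
        int  good_bricks;          /* bricks where the fop succeeded */
        int  total_bricks;         /* bricks in the subvol (6 for 4+2) */
    };

    /* The final xattrop that bumps trusted.ec.version is only needed when
     * the dirty flag was set or some brick missed the fop; otherwise it
     * would just move the version from x to x+1 everywhere for no gain. */
    static bool needs_final_version_update(const struct lock_state *ls)
    {
        if (ls->optimistic_changelog && !ls->dirty_set &&
            ls->good_bricks == ls->total_bricks)
            return false;
        return true;
    }

    int main(void)
    {
        struct lock_state ls = { true, false, 6, 6 };

        printf("send final xattrop: %s\n",
               needs_final_version_update(&ls) ? "yes" : "no");
        return 0;
    }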
REVIEW: https://review.gluster.org/21105 (cluster/ec: Don't update trusted.ec.version if fop succeeds) posted (#1) for review on master by Ashish Pandey
COMMIT: https://review.gluster.org/21105 committed in master by "Xavi Hernandez" <xhernandez> with a commit message- cluster/ec: Don't update trusted.ec.version if fop succeeds

If a fop has succeeded on all the bricks and we are about to release the lock, there is no need to update the version for the file/entry. All it would do is increase the version from x to x+1 on all the bricks. If this update (x to x+1) fails on some brick, it will indicate that the entry is unhealthy while in reality everything is fine with the entry.

Avoiding this update lets us skip one xattrop at the end of the fop, which decreases the chances of entries ending up in an unhealthy state and also improves performance.

Change-Id: Id9fca6bd2991425db6ed7d1f36af27027accb636
fixes: bz#1623759
Signed-off-by: Ashish Pandey <aspandey>
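As an illustration of the failure window the commit message mentions, the following standalone sketch (again not glusterfs code, just a simplified model of a 4+2 disperse subvolume) counts how many bricks agree on the latest version when the final version-bump xattrop reaches only 3 of the 6 bricks:

    /* Simplified model of the 4+2 failure window, not glusterfs code. */
    #include <stdio.h>

    #define BRICKS 6   /* 4+2 disperse configuration */
    #define NEEDED 4   /* fragments required to read the file */

    /* Count how many bricks agree on the highest version. */
    static int bricks_on_latest_version(const long long version[BRICKS])
    {
        long long max = version[0];
        int count = 0, i;

        for (i = 1; i < BRICKS; i++)
            if (version[i] > max)
                max = version[i];
        for (i = 0; i < BRICKS; i++)
            if (version[i] == max)
                count++;
        return count;
    }

    int main(void)
    {
        long long version[BRICKS] = { 7, 7, 7, 7, 7, 7 };
        int i;

        /* The fop itself succeeded everywhere, but the final version-bump
         * xattrop reaches only 3 of the 6 bricks. */
        for (i = 0; i < 3; i++)
            version[i]++;

        int good = bricks_on_latest_version(version);
        printf("bricks on latest version: %d (need %d) -> %s\n",
               good, NEEDED,
               good >= NEEDED ? "accessible" : "looks unhealthy");
        return 0;
    }

With only 3 bricks agreeing on the latest version, fewer than the 4 fragments needed are considered good, so the entry looks unhealthy even though the data is intact. Skipping the update when the fop succeeded everywhere keeps all bricks at the same version and avoids this window entirely.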
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/