Problem: shard_post_lookup_fsync_handler() walks the list of inode-ctxs that need to be fsynced and, in the cbk, removes each inode-ctx from the list. When the first such member is removed, list_del() updates the list head's memory with new next/prev pointers, and at that point there is no guarantee that the stack memory holding the list head is still valid. Fix: Do list_del_init() in the loop, before winding the fsync.
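To make the pattern concrete, here is a minimal, self-contained C sketch of the fix, assuming kernel-style intrusive lists like the ones GlusterFS uses. The names inode_ctx_t, wind_fsync(), and post_lookup_fsync() are illustrative stand-ins, not the actual shard.c code; the point is only that each entry is unlinked inside the loop, while the stack-local list head is still in scope, instead of in an asynchronous callback that may run after the caller's frame is gone.

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *e, struct list_head *head) {
    e->prev = head->prev;
    e->next = head;
    head->prev->next = e;
    head->prev = e;
}

/* Unlink the entry AND point it back at itself, so later list ops
 * on the entry are harmless and never touch the old neighbours. */
static void list_del_init(struct list_head *e) {
    e->next->prev = e->prev;
    e->prev->next = e->next;
    e->next = e->prev = e;
}

typedef struct {
    struct list_head to_fsync_list;  /* linkage on the local fsync list */
    int              id;
} inode_ctx_t;

/* Stand-in for winding an fsync fop; in the real code the cbk fires
 * asynchronously, possibly after the caller's stack frame is dead. */
static void wind_fsync(inode_ctx_t *ctx) {
    printf("fsync wound for inode-ctx %d\n", ctx->id);
}

/* Fixed pattern: unlink each entry NOW, while `copy` (a stack-local
 * list head) is still valid memory. Deferring the unlink to the fsync
 * cbk would, on the first removal, write new next/prev pointers into
 * `copy` long after this function may have returned. */
static void post_lookup_fsync(struct list_head *copy) {
    struct list_head *pos = copy->next, *next;
    while (pos != copy) {
        next = pos->next;  /* save before unlinking */
        inode_ctx_t *ctx = (inode_ctx_t *)
            ((char *)pos - offsetof(inode_ctx_t, to_fsync_list));
        list_del_init(pos);  /* do this in the loop, not in the cbk */
        wind_fsync(ctx);
        pos = next;
    }
}

int main(void) {
    struct list_head copy = LIST_HEAD_INIT(copy);  /* stack-local head */
    inode_ctx_t a = { .id = 1 }, b = { .id = 2 };
    list_add_tail(&a.to_fsync_list, &copy);
    list_add_tail(&b.to_fsync_list, &copy);
    post_lookup_fsync(&copy);
    return 0;
}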
https://review.gluster.org/#/c/19737/1
REVIEW: https://review.gluster.org/19737 (features/shard: Do list_del_init() while list memory is valid) posted (#1) for review on master by Pranith Kumar Karampuri
COMMIT: https://review.gluster.org/19737 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message:

features/shard: Do list_del_init() while list memory is valid

Problem: shard_post_lookup_fsync_handler() walks the list of inode-ctxs that need to be fsynced and, in the cbk, removes each inode-ctx from the list. When the first member of the list is removed, list_del() updates the list head's memory with new next/prev pointers, and at that point there is no guarantee that the list head, which is on the stack of shard_post_lookup_fsync_handler(), is still valid.

Fix: Do list_del_init() in the loop, before winding the fsync.

BUG: 1557876
Change-Id: If429d3634219e1a435bd0da0ed985c646c59c2ca
Signed-off-by: Pranith Kumar K <pkarampu>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report. glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html [2] https://www.gluster.org/pipermail/gluster-users/