Bug 1557876
| Summary: | Fuse mount crashed with only one VM running with its image on that volume | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | sharding | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | bugs <bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | bugs, kdhananj, knarra, nbalacha, pkarampu, rhs-bugs, sabose, sasundar, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-v4.1.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | 1556895 | Environment: | |
| Last Closed: | 2018-06-20 18:02:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
REVIEW: https://review.gluster.org/19737 (features/shard: Do list_del_init() while list memory is valid) posted (#1) for review on master by Pranith Kumar Karampuri

COMMIT: https://review.gluster.org/19737 committed in master by "Pranith Kumar Karampuri" <pkarampu> with the commit message:

features/shard: Do list_del_init() while list memory is valid

Problem: shard_post_lookup_fsync_handler() goes over the list of inode-ctx that need to be fsynced, and in the cbk it removes each inode-ctx from the list. When the first member of the list is removed, it tries to modify the list head's memory with the latest next/prev, and when this happens there is no guarantee that the list head, which is from the stack memory of shard_post_lookup_fsync_handler(), is still valid.

Fix: Do list_del_init() in the loop before winding fsync.

BUG: 1557876
Change-Id: If429d3634219e1a435bd0da0ed985c646c59c2ca
Signed-off-by: Pranith Kumar K <pkarampu>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still present in glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/
Problem: shard_post_lookup_fsync_handler() goes over the list of inode-ctx that need to be fsynced, and in the cbk it removes each inode-ctx from the list. When the first such member is removed from the list, it tries to modify the list head's memory with the latest next/prev, and when this happens there is no guarantee that the stack memory holding the list head is still valid.

Fix: Do list_del_init() in the loop before winding fsync.