Bug 1544090
Summary: | possible memleak in glusterfsd process with brick multiplexing on | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Mohit Agrawal <moagrawa> |
Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | mainline | CC: | amukherj, bmekala, bugs, kramdoss, nchilaka, pprakash, rcyriac, rhinduja, rhs-bugs, storage-qa-internal, vbellur |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | brick-multiplexing | ||
Fixed In Version: | glusterfs-v4.1.0 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1535281 | Environment: | |
Last Closed: | 2018-06-20 17:59:24 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | | ||
Bug Blocks: | 1535281, 1549473 | ||
Description
Mohit Agrawal
2018-02-10 06:52:31 UTC
REVIEW: https://review.gluster.org/19537 (glusterfsd: Memleak in glusterfsd process while brick mux is on) posted (#1) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19537 committed in master by "Jeff Darcy" <jeff.us> with a commit message:

glusterfsd: Memleak in glusterfsd process while brick mux is on

Problem: When a volume is stopped while brick multiplexing is enabled, memory is not cleaned up in all server-side xlators.

Solution: To clean up memory for all server-side xlators, call fini in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP notification to the top xlator.

BUG: 1544090
Change-Id: Ifa1525e25b697371276158705026b421b4f81140
Signed-off-by: Mohit Agrawal <moagrawa>

REVIEW: https://review.gluster.org/19580 (Revert "glusterfsd: Memleak in glusterfsd process while brick mux is on") posted (#1) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19580 committed in master by "Amar Tumballi" <amarts> with a commit message:

Revert "glusterfsd: Memleak in glusterfsd process while brick mux is on"

Some code paths still require cleanup while brick mux is on; a new patch will be uploaded once all of those code paths are resolved.

This reverts commit b313d97faa766443a7f8128b6e19f3d2f1b267dd.

BUG: 1544090
Change-Id: I26ef1d29061092bd9a409c8933d5488e968ed90e
Signed-off-by: Mohit Agrawal <moagrawa>

REVIEW: https://review.gluster.org/19616 (glusterfsd: Memleak in glusterfsd process while brick mux is on) posted (#1) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19616 committed in master by "Amar Tumballi" <amarts> with a commit message:

glusterfsd: Memleak in glusterfsd process while brick mux is on

Problem: When a volume is stopped while brick multiplexing is enabled, memory is not cleaned up in all server-side xlators.

Solution: To clean up memory for all server-side xlators, call fini in glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP notification to the top xlator.

BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa>

Note: All test cases were run in a separate build (https://review.gluster.org/19574) with the same patch after forcefully enabling brick mux; all test cases passed.

Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc

REVIEW: https://review.gluster.org/19734 (gluster: Sometimes Brick process is crashed at the time of stopping brick) posted (#1) for review on master by MOHIT AGRAWAL

REVIEW: https://review.gluster.org/19734 (gluster: Sometimes Brick process is crashed at the time of stopping brick) posted (#5) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19734 committed in master by "Raghavendra G" <rgowdapp> with a commit message:

gluster: Sometimes Brick process is crashed at the time of stopping brick

Problem: The brick process sometimes crashes when a brick is stopped while brick mux is enabled.

Solution: The crash happened because RPC connections were not cleaned up properly while brick mux was enabled. With this patch, after sending the GF_EVENT_CLEANUP notification to the (server) xlator, the process waits for all RPC client connections of that xlator to be destroyed. Once the RPC connections of all clients associated with the brick have been destroyed in server_rpc_notify, xlator_mem_cleanup is called for the brick xlator as well as all of its child xlators. To avoid races during cleanup, two new flags are introduced on each xlator: cleanup_starting and call_cleanup.

BUG: 1544090
Signed-off-by: Mohit Agrawal <moagrawa>

Note: All test cases were run in a separate build (https://review.gluster.org/#/c/19700/) with the same patch after forcefully enabling brick mux; all test cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
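The commit message above boils down to an ordering rule: mark the brick xlator as shutting down, send the cleanup event, let every RPC client connection for that brick drain, and only then free the xlator memory. Below is a minimal, self-contained C sketch of that ordering; the simplified xlator_t structure and the helper names notify_cleanup, on_client_disconnect, and handle_terminate are stand-ins invented for illustration and are not the actual GlusterFS APIs, apart from reusing the names xlator_mem_cleanup, cleanup_starting, and call_cleanup mentioned in the commit message.

```c
/*
 * Simplified, self-contained sketch (NOT the actual GlusterFS code) of the
 * cleanup ordering described above: mark the xlator with cleanup_starting,
 * notify it that cleanup has begun, wait until every outstanding RPC client
 * connection is gone, and only then release per-xlator memory.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_CHILDREN 4

/* Stand-in for a (much richer) xlator structure. */
typedef struct xlator {
    const char     *name;
    int             rpc_clients;      /* live client connections          */
    bool            cleanup_starting; /* set before sending CLEANUP       */
    bool            call_cleanup;     /* set once memory teardown may run */
    struct xlator  *children[MAX_CHILDREN];
    int             nchildren;
} xlator_t;

/* Stand-in for delivering GF_EVENT_CLEANUP to the top (server) xlator. */
static void notify_cleanup(xlator_t *xl)
{
    printf("CLEANUP notified to %s\n", xl->name);
}

/* Stand-in for a disconnect callback: each call drops one connection. */
static void on_client_disconnect(xlator_t *xl)
{
    if (xl->rpc_clients > 0)
        xl->rpc_clients--;
}

/* Simplified stand-in for xlator_mem_cleanup(): frees brick and children. */
static void xlator_mem_cleanup(xlator_t *xl)
{
    for (int i = 0; i < xl->nchildren; i++)
        printf("freeing child xlator %s\n", xl->children[i]->name);
    printf("freeing brick xlator %s\n", xl->name);
}

/* Cleanup ordering for one multiplexed brick. */
static void handle_terminate(xlator_t *brick)
{
    /* 1. Mark cleanup as started so races with new requests can be avoided. */
    brick->cleanup_starting = true;

    /* 2. Tell the xlator graph to start tearing down. */
    notify_cleanup(brick);

    /* 3. Wait until every RPC client connection is destroyed.  In the real
     *    process this happens asynchronously via disconnect notifications;
     *    the loop below merely simulates those callbacks arriving. */
    while (brick->rpc_clients > 0)
        on_client_disconnect(brick);

    /* 4. Only now is it safe to release memory for this brick's xlators. */
    brick->call_cleanup = true;
    xlator_mem_cleanup(brick);
}

int main(void)
{
    xlator_t posix = { .name = "brick-posix" };
    xlator_t brick = { .name = "brick-server", .rpc_clients = 3,
                       .children = { &posix }, .nchildren = 1 };

    handle_terminate(&brick);
    return 0;
}
```

In the real brick process the disconnects arrive asynchronously through RPC notifications rather than the synchronous loop above; the point of the ordering is simply that memory teardown is deferred until the last client connection for the brick is gone.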
REVIEW: https://review.gluster.org/19910 (gluster: Brick process can be crash at the time of call xlator cbks) posted (#1) for review on master by MOHIT AGRAWAL

REVIEW: https://review.gluster.org/19912 (glusterd: build is failed for glusterd2) posted (#1) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/19912 committed in master by "MOHIT AGRAWAL" <moagrawa> with a commit message:

server: fix unresolved symbols by moving them to libglusterfs

Problem: The glusterd2 build fails due to undefined symbols (xlator_mem_cleanup, glusterfsd_ctx) in server.so.

Solution: Two changes resolve this:
1) Move the xlator_mem_cleanup code from glusterfsd-mgmt.c to xlator.c so that it becomes part of libglusterfs.so.
2) Replace glusterfsd_ctx with this->ctx, because the symbol glusterfsd_ctx is not part of server.so.

BUG: 1544090
Change-Id: Ie5e6fba9ed458931d08eb0948d450aa962424ae5
fixes: bz#1544090
Signed-off-by: Mohit Agrawal <moagrawa>

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/