Bug 1528733
Summary: | memory leak: get-state leaking memory in small amounts
---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage
Component: | glusterd
Status: | CLOSED ERRATA
Severity: | medium
Priority: | medium
Version: | rhgs-3.3
Target Release: | RHGS 3.4.0
Hardware: | Unspecified
OS: | Unspecified
Fixed In Version: | glusterfs-3.12.2-6
Doc Type: | If docs needed, set a value
Reporter: | Nag Pavan Chilakam <nchilaka>
Assignee: | Atin Mukherjee <amukherj>
QA Contact: | Bala Konda Reddy M <bmekala>
CC: | amukherj, nchilaka, rhinduja, rhs-bugs, sheggodu, storage-qa-internal, vbellur
Clones: | 1531149 (view as bug list)
Bug Depends On: | 1531149, 1532475
Bug Blocks: | 1503137
Type: | Bug
Last Closed: | 2018-09-04 06:40:20 UTC
Description
Nag Pavan Chilakam, 2017-12-23 06:10:42 UTC
Leaks seen in the following allocation types:

- gf_common_mt_gf_timer_t
- gf_common_mt_asprintf
- gf_common_mt_strdup
- gf_common_mt_char
- gf_common_mt_socket_private_t
- gf_common_mt_rpcsvc_wrapper_t
- gf_common_mt_rpc_trans_t

Details below, comparing the relevant memusage sections of two statedumps taken around a single `gluster get-state` run (a script that automates this comparison is sketched at the end of this report):

```
[mgmt/glusterd.management - usage-type gf_common_mt_gf_timer_t memusage]
size=192 num_allocs=3 max_size=384 max_num_allocs=6 total_allocs=434240
vs
[mgmt/glusterd.management - usage-type gf_common_mt_gf_timer_t memusage]
size=192 num_allocs=3 max_size=384 max_num_allocs=6 total_allocs=434245

[mgmt/glusterd.management - usage-type gf_common_mt_asprintf memusage]
size=95942 num_allocs=10204 max_size=95975 max_num_allocs=10205 total_allocs=13049482
vs
[mgmt/glusterd.management - usage-type gf_common_mt_asprintf memusage]
size=96249 num_allocs=10237 max_size=96282 max_num_allocs=10238 total_allocs=13049519

[mgmt/glusterd.management - usage-type gf_common_mt_strdup memusage]
size=7056002 num_allocs=450809 max_size=7056020 max_num_allocs=450810 total_allocs=6742877
vs
[mgmt/glusterd.management - usage-type gf_common_mt_strdup memusage]
size=7058259 num_allocs=450870 max_size=7058277 max_num_allocs=450871 total_allocs=6742951

[mgmt/glusterd.management - usage-type gf_common_mt_char memusage]
size=49290 num_allocs=849 max_size=49367 max_num_allocs=1807 total_allocs=62985416
vs
[mgmt/glusterd.management - usage-type gf_common_mt_char memusage]
size=49410 num_allocs=850 max_size=49487 max_num_allocs=1807 total_allocs=62985424

[mgmt/glusterd.management - usage-type gf_common_mt_socket_private_t memusage]
size=69360 num_allocs=102 max_size=80240 max_num_allocs=118 total_allocs=16222
vs
[mgmt/glusterd.management - usage-type gf_common_mt_socket_private_t memusage]
size=69360 num_allocs=102 max_size=80240 max_num_allocs=118 total_allocs=16223

[mgmt/glusterd.management - usage-type gf_common_mt_rpcsvc_wrapper_t memusage]
size=64 num_allocs=2 max_size=96 max_num_allocs=3 total_allocs=16122
vs
[mgmt/glusterd.management - usage-type gf_common_mt_rpcsvc_wrapper_t memusage]
size=64 num_allocs=2 max_size=96 max_num_allocs=3 total_allocs=16123

[mgmt/glusterd.management - usage-type gf_common_mt_rpc_trans_t memusage]
size=129744 num_allocs=102 max_size=150096 max_num_allocs=118 total_allocs=16223
vs
[mgmt/glusterd.management - usage-type gf_common_mt_rpc_trans_t memusage]
size=129744 num_allocs=102 max_size=150096 max_num_allocs=118 total_allocs=16224
```

Atin Mukherjee (comment #8):

Have posted one patch https://review.gluster.org/19139 which reduces the leak by quite a margin. But we still have a small leak identified.

(In reply to Atin Mukherjee from comment #8)
> Have posted one patch https://review.gluster.org/19139 which reduces the
> leak by quite a margin. But we still have a small leak identified.

s/identified/unidentified

Verification:

Build: 3.12.2-7

Created some 100 volumes with brick mux enabled, then ran `gluster get-state` every 10 seconds for 13 hours, occasionally also executing `gluster vol profile 2cross33_99 start` and `gluster vol profile 2cross33_99 info`. glusterd memory grew by only about 3 MB over the 13 hours. Since there is no significant increase in glusterd memory, marking this bug as verified. (A sketch of this soak loop also appears at the end of this report.)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
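For reference, below is a minimal sketch of how the before/after statedump comparison above can be automated. It is not part of the original report: the script name and dump file names are hypothetical, and it only assumes the statedump layout visible in the excerpts above, where each `[... memusage]` header is followed by `key=value` counters. Statedumps for glusterd can typically be triggered by sending SIGUSR1 to the glusterd process, with the dumps landing under /var/run/gluster by default.

```python
#!/usr/bin/env python3
"""Diff the 'memusage' sections of two glusterd statedumps and print
the allocation types whose live size or allocation count changed."""

import re
import sys

HEADER = re.compile(r"^\[(.+memusage)\]$")


def parse(path):
    """Return {section-name: {counter: int}} for every memusage section."""
    sections = {}
    current = None
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("["):
                # Track only memusage sections; reset on any other header.
                m = HEADER.match(line)
                current = sections.setdefault(m.group(1), {}) if m else None
                continue
            if current is None:
                continue
            # Counters appear as key=value; tolerate several per line.
            for token in line.split():
                key, sep, value = token.partition("=")
                if sep and value.isdigit():
                    current[key] = int(value)
    return sections


def main(before_path, after_path):
    before, after = parse(before_path), parse(after_path)
    for name in sorted(set(before) & set(after)):
        b, a = before[name], after[name]
        # A genuine leak shows up as growth in live size/num_allocs,
        # not just in the monotonically increasing total_allocs.
        if (a.get("size"), a.get("num_allocs")) != (b.get("size"), b.get("num_allocs")):
            print(name)
            for counter in ("size", "num_allocs", "total_allocs"):
                print(f"  {counter}: {b.get(counter)} -> {a.get(counter)}")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run it as, for example, `python3 statedump_diff.py glusterd.dump.before glusterd.dump.after`; only types whose live counters moved are printed, which matches how the leaking types were singled out above.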
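And here is a rough sketch of the kind of soak loop used for verification. It is not the QA team's actual harness: the iteration math, the hourly logging, and reading VmRSS from /proc are my additions, and only the `gluster get-state` command and the 10-second/13-hour cadence come from the report itself.

```python
#!/usr/bin/env python3
"""Soak test sketch: run 'gluster get-state' every 10 seconds and log
glusterd's resident memory so growth over hours becomes visible.
Assumes a running glusterd and the gluster CLI on the local host."""

import subprocess
import time


def glusterd_rss_kb():
    """Resident set size of glusterd in kB, read from /proc."""
    pid = subprocess.check_output(["pidof", "glusterd"], text=True).split()[0]
    with open(f"/proc/{pid}/status") as fh:
        for line in fh:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0


start = glusterd_rss_kb()
for i in range(13 * 360):                    # 10 s interval for ~13 hours
    subprocess.run(["gluster", "get-state"],
                   stdout=subprocess.DEVNULL, check=False)
    if i % 360 == 359:                       # log roughly once per hour
        rss = glusterd_rss_kb()
        print(f"after ~{(i + 1) // 360} h: RSS {rss} kB (+{rss - start} kB)")
    time.sleep(10)
```

With the fix in place, the growth printed over the full run should stay in the low single-digit MB range, consistent with the ~3 MB over 13 hours observed during verification.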