Bug 1651915
| Summary: | On running "gluster volume status <vol_name> detail" continuously in the background, glusterd leaks memory | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Upasana <ubansal> |
| Component: | glusterd | Assignee: | Sanju <srakonde> |
| Status: | CLOSED WONTFIX | QA Contact: | Bala Konda Reddy M <bmekala> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | rhgs-3.4 | CC: | amukherj, nchilaka, puebele, rhs-bugs, sanandpa, sheggodu, srakonde, storage-qa-internal, ubansal, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-07 13:39:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Upasana
2018-11-21 07:55:21 UTC
Sanju:

Based on comment 10, we are seeing very little memory leak if we run the command "gluster v status detail" with a sleep of 15 sec. IMO, it is a minor leak which doesn't have any impact. I'm more inclined towards closing this bug. Sweta/Upasana, please do let me know your thoughts.

Atin Mukherjee (comment #15):

Sanju - Can we please take periodic statedumps while running this command on an interval, to see which structure has an increase in memory? Have we done that? I agree with you that there's not much impact here, but it might be worth fixing this in upstream anyway. Did we see this happening in upstream master?

Sanju:

(In reply to Atin Mukherjee from comment #15)
> Sanju - Can we please take periodic statedumps while running this command on
> an interval, to see which structure has an increase in memory? Have we done
> that? I agree with you that there's not much impact here, but it might be
> worth fixing this in upstream anyway. Did we see this happening in upstream
> master?

Atin, we haven't tried to look at which structure's memory is growing.

In a 3-node cluster, I ran the "gluster v status <volname> detail" command 100,000 times in a loop with upstream master. For those 100,000 (1 lakh) runs, glusterd's memory increased by 124 MB. In the same cluster, I ran the "gluster v status <volname> detail" command 100,000 times in a loop with a sleep of 15 seconds between each run, and I haven't observed any increase in glusterd's memory.

I believe some of the Coverity fixes might have fixed this leak in upstream. Given that this leak doesn't exist upstream and the downstream leak is minor, I don't think it's worth spending time on this. Anyway, I'm always ready to send out a downstream fix for this if we want it.

Thanks,
Sanju
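For reference, a loop like the one Sanju describes can be scripted while sampling glusterd's resident set size from /proc. This is a minimal sketch, not a script attached to this bug; the volume name, run count, sleep interval, sampling stride, and log path are all placeholder assumptions.

```bash
#!/bin/bash
# Hypothetical reproducer: run "gluster volume status <volname> detail"
# in a loop and periodically log glusterd's RSS to watch for growth.
VOLNAME=${1:?usage: $0 <volname>}
RUNS=100000     # matches the run count mentioned in the comment
SLEEP=15        # set to 0 to reproduce the no-sleep variant

GLUSTERD_PID=$(pidof glusterd) || { echo "glusterd not running" >&2; exit 1; }

for ((i = 1; i <= RUNS; i++)); do
    gluster volume status "$VOLNAME" detail > /dev/null
    # Log RSS (in KiB) every 1000 iterations to track growth over time.
    if (( i % 1000 == 0 )); then
        rss=$(awk '/VmRSS/ {print $2}' /proc/"$GLUSTERD_PID"/status)
        echo "$(date +%s) iter=$i rss_kb=$rss" >> /tmp/glusterd-rss.log
    fi
    sleep "$SLEEP"
done
```

Plotting or eyeballing /tmp/glusterd-rss.log after the run makes the with-sleep and no-sleep variants directly comparable.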
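Atin's statedump suggestion can likewise be scripted. Gluster processes write a statedump on receiving SIGUSR1 (by default under /var/run/gluster), and each dump carries per-allocation-type counters (size, num_allocs, total_allocs) that can be compared across dumps to find the structure that keeps growing. The dump count and interval below are illustrative assumptions, not values from the bug.

```bash
#!/bin/bash
# Sketch: take periodic glusterd statedumps while the status loop runs,
# so successive dumps can be diffed to identify the growing allocation type.
GLUSTERD_PID=$(pidof glusterd) || exit 1

for i in 1 2 3 4 5; do
    kill -USR1 "$GLUSTERD_PID"   # SIGUSR1 triggers a statedump
    sleep 300                    # interval between dumps (arbitrary choice)
done

# List the two most recent dumps; compare their per-type "size=" and
# "num_allocs=" counters to spot the structure whose memory increases.
ls -t /var/run/gluster/glusterdump.* 2>/dev/null | head -2
```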