Bug 1394229
| Field | Value |
|---|---|
| Summary | Memory leak on graph switch |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | libgfapi |
| Status | CLOSED DEFERRED |
| Severity | unspecified |
| Priority | high |
| Version | rhgs-3.2 |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Sahina Bose <sabose> |
| Assignee | Niels de Vos <ndevos> |
| QA Contact | Vivek Das <vdas> |
| CC | amukherj, pgurusid, rhs-bugs, sabose, senaik, storage-qa-internal |
| Keywords | ZStream |
| Type | Bug |
| Doc Type | If docs needed, set a value |
| Clones | 1403156 (view as bug list) |
| Last Closed | 2018-10-10 08:27:13 UTC |
| Bug Depends On | 1403156 |
| Bug Blocks | 1153907 |
Description
Sahina Bose 2016-11-11 12:49:14 UTC

The culprits:

- The inode table of the old graph needs cleanup: fix the inode leaks, and fix each xlator's forget() so it frees its inode ctx properly.
- The xlator objects themselves (xlator_t).
- The mem_acct structure in every xlator object: fix all the leaks so that the ref count of the mem_acct structure drops to 0.
- Implement fini() in every xlator.

*** Bug 1167648 has been marked as a duplicate of this bug. ***

Comment 10, Yaniv Kaul:

This is an almost two-year-old bug. How relevant is it still?

Comment 11, Sahina Bose (in reply to Yaniv Kaul from comment #10):

This is still an issue with gfapi access; however, it has not been prioritised because RHHI has not made the switch to accessing volumes via gfapi.

Comment 12, Yaniv Kaul (in reply to Sahina Bose from comment #11):

Are we going to move to gfapi, given that the performance gap is not as high as we thought initially? Can we have an intern working on the leaks (can they be seen with ASAN / valgrind)? I wish to reduce the number of old BZs not being handled.

Reply (in reply to Yaniv Kaul from comment #12):

We do not have immediate plans to support gfapi. Closing this BZ for now; it will be re-opened when we enable gfapi and if we find a problem with memory leaks.
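On the question of whether the leaks can be seen with ASAN or valgrind: in principle, yes. A graph switch happens on the client side whenever a volume option changes, so a long-lived gfapi client observed under valgrind should accumulate unreachable inode-table and xlator allocations with each switch. A minimal sketch, assuming a placeholder libgfapi client binary `gfapi-client` and a volume named `testvol` (neither is from this bug):

```shell
# Run a long-lived libgfapi client under valgrind with full leak checking.
# 'gfapi-client' stands in for any program linked against libgfapi.
valgrind --leak-check=full --show-leak-kinds=definite \
    ./gfapi-client testvol server1 &

# Toggling any volume option pushes a new graph to connected clients;
# each switch should leave the old graph's allocations behind.
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol performance.stat-prefetch on
```

Alternatively, building the client (and glusterfs itself) with `-fsanitize=address` should surface the same leaks at process exit, without valgrind's runtime overhead.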