Bug 1394229

Summary: Memory leak on graph switch
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sahina Bose <sabose>
Component: libgfapi
Assignee: Niels de Vos <ndevos>
Status: CLOSED DEFERRED
QA Contact: Vivek Das <vdas>
Severity: unspecified
Docs Contact:
Priority: high
Version: rhgs-3.2
CC: amukherj, pgurusid, rhs-bugs, sabose, senaik, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1403156
Environment:
Last Closed: 2018-10-10 08:27:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1403156
Bug Blocks: 1153907

Description Sahina Bose 2016-11-11 12:49:14 UTC
Description of problem:

Whenever the volume topology changes, i.e. when options are changed or bricks are added or removed, a process using libgfapi to access the volume leaks memory on the resulting graph switch.

For long-running processes such as libvirt, which is used to manage guests, this is an issue.


Version-Release number of selected component (if applicable):
3.2

How reproducible:
NA

Steps to Reproduce:
NA
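
A rough reproducer sketch of the scenario described above (the volume name, host, and the option used to force graph switches are placeholders, not from the report): keep a long-lived gfapi handle open, then toggle a volume option from another shell and watch the client's resident memory grow.

    /* gfapi_leak_test.c -- build with: gcc -g -o gfapi_leak_test gfapi_leak_test.c -lgfapi */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/statvfs.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        struct statvfs buf;

        /* "testvol" and the host below are hypothetical; use a real test volume. */
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "gluster-host.example.com", 24007);
        if (glfs_init(fs) != 0)
            return 1;

        /* While this runs, repeatedly toggle an option from another shell, e.g.
         *   gluster volume set testvol performance.stat-prefetch off   (then on again)
         * Each change triggers a graph switch in this client. */
        for (int i = 0; i < 100; i++) {
            glfs_statvfs(fs, "/", &buf);
            sleep(5);
        }

        glfs_fini(fs);
        return 0;
    }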

Comment 1 Poornima G 2016-11-17 12:29:20 UTC
The culprits:
- The inode table of the old graph needs cleanup:
        Fix the inode leaks.
        Fix the forget() of each xlator so that it frees its inode ctx properly.
- The xlator objects themselves (xlator_t).
- The mem_accnt structure in every xlator object:
        Fix all the leaks so that the ref count of the mem_accnt structure drops to 0.
- Implement fini() in every xlator.
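
As an illustration of the last item only, a minimal sketch of what a per-xlator fini() could look like, assuming the usual GlusterFS xlator boilerplate and a hypothetical translator whose private data is a single GF_CALLOC'd struct; real translators would also have to drop inode-ctx and mem-accounting references as listed above.

    /* Assumes the standard xlator headers; my_private_t is a hypothetical
     * per-xlator private struct allocated in init(). */
    void
    fini(xlator_t *this)
    {
        my_private_t *priv = this->private;

        if (!priv)
            return;

        this->private = NULL;
        GF_FREE(priv);   /* pairs with the GF_CALLOC/GF_MALLOC done in init() */
    }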

Comment 5 rjoseph 2017-02-21 07:27:37 UTC
*** Bug 1167648 has been marked as a duplicate of this bug. ***

Comment 10 Yaniv Kaul 2018-10-08 13:36:20 UTC
This is an almost 2-year-old bug. How relevant is it still?

Comment 11 Sahina Bose 2018-10-09 07:13:12 UTC
(In reply to Yaniv Kaul from comment #10)
> This is an almost 2-year-old bug. How relevant is it still?

This is still an issue with gfapi access; however, it is not prioritised as RHHI has not made the switch to access via gfapi.

Comment 12 Yaniv Kaul 2018-10-09 08:32:31 UTC
(In reply to Sahina Bose from comment #11)
> (In reply to Yaniv Kaul from comment #10)
> > This is an almost 2-year-old bug. How relevant is it still?
> 
> This is still an issue with gfapi access; however, it is not prioritised as
> RHHI has not made the switch to access via gfapi.

Are we going to move to gfapi, given that the performance gap is not as high as we thought initially? Can we have an intern working on the leaks (can they be seen with ASAN / valgrind)?

I wish to reduce the number of old BZs not being handled.
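
On the ASAN/valgrind question: leaks of this kind should be visible by running a small gfapi client (such as the sketch in the description) under valgrind while forcing a few graph switches, or by building that client with AddressSanitizer so LeakSanitizer reports leaks at exit. The commands below are only indicative:

    valgrind --leak-check=full --show-leak-kinds=all ./gfapi_leak_test

    gcc -g -fsanitize=address -o gfapi_leak_test gfapi_leak_test.c -lgfapi
    ./gfapi_leak_test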

Comment 13 Sahina Bose 2018-10-10 08:27:13 UTC
(In reply to Yaniv Kaul from comment #12)
> (In reply to Sahina Bose from comment #11)
> > (In reply to Yaniv Kaul from comment #10)
> > > This is an almost 2-year-old bug. How relevant is it still?
> > 
> > This is still an issue with gfapi access; however, it is not prioritised as
> > RHHI has not made the switch to access via gfapi.
> 
> Are we going to move to gfapi, given that the performance gap is not as high
> as we thought initially? Can we have an intern working on the leaks (can they
> be seen with ASAN / valgrind)?
> 
> I wish to reduce the amount of old BZs not being handled.

We do not have immediate plans to support gfapi.

Closing this bz for now; we will re-open it when we enable gfapi and if we find a problem with memory leaks.