Bug 1394229 - Memory leak on graph switch
Summary: Memory leak on graph switch
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: libgfapi
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: Vivek Das
URL:
Whiteboard:
Duplicates: 1167648
Depends On: 1403156
Blocks: 1153907
 
Reported: 2016-11-11 12:49 UTC by Sahina Bose
Modified: 2018-10-10 08:27 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1403156
Environment:
Last Closed: 2018-10-10 08:27:13 UTC
Embargoed:



Description Sahina Bose 2016-11-11 12:49:14 UTC
Description of problem:

Whenever the volume topology changes (i.e., options are changed, or bricks are added or removed), any process using libgfapi to access the volume leaks memory.

For long-running processes, such as libvirt managing guests, this is an issue.


Version-Release number of selected component (if applicable):
3.2

How reproducible:
NA

Steps to Reproduce:
NA
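
No exact steps were recorded, but a minimal reproducer sketch follows. It assumes a volume named testvol served from host server1 (both placeholder names, not from the original report): keep a gfapi client running while changing a volume option from another shell; each option change pushes a new graph to the client, and the client's memory use grows with every switch.

/* Hypothetical reproducer sketch; "testvol" and "server1" are placeholders.
 * Build: gcc repro.c -o gfapi-repro $(pkg-config --cflags --libs glusterfs-api)
 *
 * While this runs, trigger graph switches from another shell, e.g.:
 *   gluster volume set testvol performance.stat-prefetch off
 * and watch the RSS of this process (top, or /proc/<pid>/status) climb
 * with every switch.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/statvfs.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        glfs_t *fs = glfs_new ("testvol");
        if (!fs)
                return 1;

        glfs_set_volfile_server (fs, "tcp", "server1", 24007);
        if (glfs_init (fs) != 0) {
                fprintf (stderr, "glfs_init failed\n");
                return 1;
        }

        /* Issue a trivial operation periodically so the client stays
         * active across several graph switches. */
        for (int i = 0; i < 600; i++) {
                struct statvfs sb;
                glfs_statvfs (fs, "/", &sb);
                sleep (1);
        }

        glfs_fini (fs);
        return 0;
}

Comparing the process RSS before and after a few dozen option changes makes the leak visible.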

Comment 1 Poornima G 2016-11-17 12:29:20 UTC
The culprits:
- The inode table of the old graph needs cleanup:
        Fix the inode leaks.
        Fix forget() in each xlator so it frees its inode ctx properly.
- The xlator objects themselves (xlator_t).
- The mem_acct structure in every xlator object:
        Fix all the leaks so that the ref count on the mem_acct structure drops to 0.
- Implement fini() in every xlator (a minimal sketch follows below).
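
A minimal sketch of what the forget()/fini() items could look like, for a hypothetical translator that stores plain GF_MALLOC'd allocations in this->private and in its inode ctx slot (illustration only, not an actual patch):

#include <stdint.h>
#include <glusterfs/xlator.h>
#include <glusterfs/mem-pool.h>

int32_t
forget (xlator_t *this, inode_t *inode)
{
        uint64_t ctx = 0;

        /* Release this xlator's per-inode ctx so inodes of the old
         * graph can actually be destroyed. */
        inode_ctx_del (inode, this, &ctx);
        if (ctx)
                GF_FREE ((void *)(uintptr_t)ctx);

        return 0;
}

void
fini (xlator_t *this)
{
        /* Free the xlator's private data; once every allocation is
         * freed, the ref count on the mem_acct structure can drop to 0
         * and the xlator_t itself becomes reclaimable. */
        GF_FREE (this->private);
        this->private = NULL;
}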

Comment 5 rjoseph 2017-02-21 07:27:37 UTC
*** Bug 1167648 has been marked as a duplicate of this bug. ***

Comment 10 Yaniv Kaul 2018-10-08 13:36:20 UTC
This bug is almost 2 years old. Is it still relevant?

Comment 11 Sahina Bose 2018-10-09 07:13:12 UTC
(In reply to Yaniv Kaul from comment #10)
> This bug is almost 2 years old. Is it still relevant?

This is still an issue with gfapi access; however, it is not prioritised, as RHHI has not made the switch to accessing volumes via gfapi.

Comment 12 Yaniv Kaul 2018-10-09 08:32:31 UTC
(In reply to Sahina Bose from comment #11)
> (In reply to Yaniv Kaul from comment #10)
> > This bug is almost 2 years old. Is it still relevant?
> 
> This is still an issue with gfapi access; however, it is not prioritised,
> as RHHI has not made the switch to accessing volumes via gfapi.

Are we going to move to gfapi, given that the performance gap is not as large as we initially thought? Can we have an intern work on the leaks (can they be seen with ASAN / valgrind)?

I want to reduce the number of old BZs that are not being handled.
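
For reference, both tools can surface this kind of leak. A minimal check, assuming a reproducer binary named ./gfapi-repro (a placeholder, such as the sketch in the description above), would be:

    valgrind --leak-check=full --show-leak-kinds=definite ./gfapi-repro

Alternatively, build the reproducer with -fsanitize=address; LeakSanitizer prints a leak summary when the process exits.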

Comment 13 Sahina Bose 2018-10-10 08:27:13 UTC
(In reply to Yaniv Kaul from comment #12)
> (In reply to Sahina Bose from comment #11)
> > (In reply to Yaniv Kaul from comment #10)
> > > This bug is almost 2 years old. Is it still relevant?
> > 
> > This is still an issue with gfapi access; however, it is not
> > prioritised, as RHHI has not made the switch to accessing volumes via
> > gfapi.
> 
> Are we going to move to gfapi, given that the performance gap is not as
> large as we initially thought? Can we have an intern work on the leaks
> (can they be seen with ASAN / valgrind)?
> 
> I want to reduce the number of old BZs that are not being handled.

We do not have immediate plans to support gfapi.

Closing this bz for now; we will re-open it when we enable gfapi and if we find a problem with the memory leak.

