Bug 1394229 - Memory leak on graph switch
Status: CLOSED DEFERRED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: libgfapi
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Assigned To: Niels de Vos
QA Contact: Vivek Das
Keywords: ZStream
Duplicates: 1167648
Depends On: 1403156
Blocks: 1153907

Reported: 2016-11-11 07:49 EST by Sahina Bose
Modified: 2018-10-10 04:27 EDT
CC List: 6 users

Clones: 1403156
Last Closed: 2018-10-10 04:27:13 EDT
Type: Bug


Attachments: None
Description Sahina Bose 2016-11-11 07:49:14 EST
Description of problem:

Whenever the volume topology changes (i.e., options are changed, or bricks are added or removed), a process using libgfapi to access the volume leaks memory.

This is an issue for long-running processes such as libvirt, which is used to manage guests.


Version-Release number of selected component (if applicable):
3.2

How reproducible:
NA

Steps to Reproduce:
NA
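
Although no steps were recorded, a minimal gfapi client along the lines of the sketch below should exercise the leak: keep the client connected while forcing graph switches from another shell, and watch the process's memory use grow. The volume name "testvol", the host "localhost", the file name graph-leak.c, and the toggled option are placeholders, not taken from this report.

/*
 * Reproducer sketch (illustrative, not from the original report).
 *
 * Build:  gcc -o graph-leak graph-leak.c -lgfapi
 *
 * While it runs, force graph switches from another shell, e.g.:
 *   gluster volume set testvol performance.stat-prefetch off
 *   gluster volume set testvol performance.stat-prefetch on
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
        struct stat st;
        int i;
        glfs_t *fs = glfs_new("testvol");   /* placeholder volume name */

        if (!fs)
                return 1;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0) {
                fprintf(stderr, "glfs_init failed\n");
                glfs_fini(fs);
                return 1;
        }

        /* Each graph switch should free the old graph; with this bug the
         * old inode table, xlator_t objects and mem_accnt data stay
         * allocated, so memory use climbs with every switch. */
        for (i = 0; i < 600; i++) {
                glfs_stat(fs, "/", &st);    /* keep the client active */
                sleep(1);
        }

        glfs_fini(fs);
        return 0;
}

Running a client like this under valgrind --leak-check=full across a few graph switches (as comment 12 later suggests) should report the old graph's allocations as leaked.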
Comment 1 Poornima G 2016-11-17 07:29:20 EST
The culprits:
- The inode table of the old graph needs cleanup:
        fix the inode leaks;
        fix forget() in each xlator so it frees its inode ctx properly.
- The xlator objects themselves (xlator_t).
- The mem_accnt structure in every xlator object:
        fix all the leaks so that the ref count of the mem_accnt structure drops to 0.
- Implement fini() in every xlator.
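
To make the list concrete, here is a rough sketch of the teardown path these items imply, written against the GlusterFS internal types (glusterfs_graph_t, xlator_t, inode_table_t). The helper names inode_table_destroy() and xlator_destroy(), and the exact call sites, are illustrative assumptions, not the actual fix.

/* Illustrative only: the shape of the per-graph cleanup this comment
 * asks for. Internal header paths vary by source tree. */
#include <glusterfs/glusterfs.h>
#include <glusterfs/xlator.h>

static void
old_graph_teardown_sketch(glusterfs_graph_t *graph)
{
        xlator_t *xl = graph->first;

        while (xl) {
                xlator_t *next = xl->next;

                /* 1. Destroy the old graph's inode table; this is where
                 *    each xlator's forget() must release its inode ctx. */
                if (xl->itable) {
                        inode_table_destroy(xl->itable);
                        xl->itable = NULL;
                }

                /* 2. Every xlator needs a working fini() that frees its
                 *    private data and outstanding allocations... */
                if (xl->fini)
                        xl->fini(xl);

                /* 3. ...so the mem_accnt ref count can reach 0 and the
                 *    xlator_t object itself can be freed (illustrative
                 *    stand-in for the real destruction path). */
                xlator_destroy(xl);

                xl = next;
        }
}

Presumably ordering matters here: forget() callbacks run against xlators that must still be alive, so the inode table has to be destroyed before fini() runs and the xlator objects are freed.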
Comment 5 rjoseph 2017-02-21 02:27:37 EST
*** Bug 1167648 has been marked as a duplicate of this bug. ***
Comment 10 Yaniv Kaul 2018-10-08 09:36:20 EDT
This bug is almost 2 years old. Is it still relevant?
Comment 11 Sahina Bose 2018-10-09 03:13:12 EDT
(In reply to Yaniv Kaul from comment #10)
> This bug is almost 2 years old. Is it still relevant?

This is still an issue with gfapi access; however, it has not been prioritised, as RHHI has not made the switch to accessing volumes via gfapi.
Comment 12 Yaniv Kaul 2018-10-09 04:32:31 EDT
(In reply to Sahina Bose from comment #11)
> This is still an issue with gfapi access; however, it has not been
> prioritised, as RHHI has not made the switch to accessing volumes via
> gfapi.

Are we going to move to gfapi, given that the performance gap is not as high as we thought initially? Can we have an intern work on the leaks (can they be seen with ASAN / valgrind)?

I want to reduce the number of old BZs that are not being handled.
Comment 13 Sahina Bose 2018-10-10 04:27:13 EDT
(In reply to Yaniv Kaul from comment #12)
> Are we going to move to gfapi, given that the performance gap is not as
> high as we thought initially? Can we have an intern work on the leaks
> (can they be seen with ASAN / valgrind)?
>
> I want to reduce the number of old BZs that are not being handled.

We do not have immediate plans to support gfapi.

Closing this BZ for now; we will re-open it when we enable gfapi, if we still find a memory leak problem.
