Bug 1196020 - glfs_fini() - pending per xlator resource frees
Summary: glfs_fini() - pending per xlator resource frees
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Niels de Vos
QA Contact: amainkar
URL:
Whiteboard:
Depends On: 1425623 1473191
Blocks: 1409773 1199436
 
Reported: 2015-02-25 05:56 UTC by Poornima G
Modified: 2018-04-16 18:08 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1199436 (view as bug list)
Environment:
Last Closed: 2018-04-16 18:08:37 UTC
Target Upstream Version:



Description Poornima G 2015-02-25 05:56:01 UTC
Description of problem:
1. The rpc_transport object is not destroyed. PARENT_DOWN should have
   destroyed this object but has not; needs to be addressed as part
   of a different patch.
2. Each xlator's fini should clean up the local pool allocated by that
   xlator. Needs to be addressed as part of a different patch.
3. Each xlator should implement forget to free its inode_ctx.
   Needs to be addressed as part of a different patch.
4. A few other leaks reported by valgrind.
5. fds and fd contexts are not freed.


Comment 2 Poornima G 2015-03-06 09:44:45 UTC
1. Release the fd list:
- fsync the fds that are open.
- Close and release all the fds that are still open

2. Handle the case where fops are called on an fs object that is being destroyed.

3. Handle the asserts and leaks in the quick-read and read-ahead xlators:
read-ahead asserts when conf->files is non-empty, which is always the case because the list is never destroyed.
quick-read asserts when inode_table->lru is not empty, which is almost always the case; fix this leak.

