Bug 765322 (GLUSTER-3590) - [glusterfs-v3.3.0qa9-78-gb23d329]: inode leak in glusterfs
Summary: [glusterfs-v3.3.0qa9-78-gb23d329]: inode leak in glusterfs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-3590
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-20 06:44 UTC by Raghavendra Bhat
Modified: 2015-12-01 16:45 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Raghavendra Bhat 2011-09-20 06:44:45 UTC
There seems to be an inode leak in glusterfs. This is the statedump of the glusterfs client after the nightly sanity tests (caches were dropped before taking the statedump).
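
For reference, a minimal sketch of how such a dump is typically collected, assuming the usual mechanism: a glusterfs process writes a statedump when it receives SIGUSR1, and the kernel caches are dropped first via /proc/sys/vm/drop_caches so that only genuinely referenced inodes stay active. The PID argument here is a placeholder for the glusterfs client (fuse mount) process.

/* Sketch only: drop kernel caches, then ask the glusterfs client to
 * dump its state by sending SIGUSR1.  Must be run as root. */
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
        pid_t glusterfs_pid;
        FILE *drop;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <glusterfs-client-pid>\n", argv[0]);
                return 1;
        }
        glusterfs_pid = (pid_t) atoi(argv[1]);

        /* Equivalent of "echo 3 > /proc/sys/vm/drop_caches": drop page
         * cache, dentries and inodes before dumping. */
        drop = fopen("/proc/sys/vm/drop_caches", "w");
        if (!drop) {
                perror("open /proc/sys/vm/drop_caches");
                return 1;
        }
        fputs("3\n", drop);
        fclose(drop);

        /* glusterfs writes a statedump on SIGUSR1. */
        if (kill(glusterfs_pid, SIGUSR1) != 0) {
                perror("kill(SIGUSR1)");
                return 1;
        }
        return 0;
}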

[xlator.mount.fuse.itable.active.12]
xlator.mount.fuse.itable.active.12.gfid=00000000-0000-0000-0000-000000000000
xlator.mount.fuse.itable.active.12.nlookup=0
xlator.mount.fuse.itable.active.12.ref=1
xlator.mount.fuse.itable.active.12.ino=0
xlator.mount.fuse.itable.active.12.ia_type=0

[xlator.performance.stat-prefetch.inodectx]
xlator.performance.stat-prefetch.inodectx.inode.gfid=00000000-0000-0000-0000-000000000000
xlator.performance.stat-prefetch.inodectx.inode.ino=0
xlator.performance.stat-prefetch.inodectx.looked_up=yes
xlator.performance.stat-prefetch.inodectx.lookup_in_progress=no
xlator.performance.stat-prefetch.inodectx.need_unwind=no
xlator.performance.stat-prefetch.inodectx.op_ret=-1
xlator.performance.stat-prefetch.inodectx.op_errno=2

[xlator.mount.fuse.itable.active.13]
xlator.mount.fuse.itable.active.13.gfid=04e13977-ba6a-44ae-9dc5-38bf5e325d34
xlator.mount.fuse.itable.active.13.nlookup=0
xlator.mount.fuse.itable.active.13.ref=1
xlator.mount.fuse.itable.active.13.ino=-67660301
xlator.mount.fuse.itable.active.13.ia_type=2

[xlator.performance.stat-prefetch.inodectx]
xlator.performance.stat-prefetch.inodectx.inode.gfid=04e13977-ba6a-44ae-9dc5-38bf5e325d34
xlator.performance.stat-prefetch.inodectx.inode.ino=-67660301
xlator.performance.stat-prefetch.inodectx.looked_up=yes
xlator.performance.stat-prefetch.inodectx.lookup_in_progress=no
xlator.performance.stat-prefetch.inodectx.need_unwind=no
xlator.performance.stat-prefetch.inodectx.op_ret=0
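
The pattern above is what makes this look like a leak: the active inode has nlookup=0 (the FUSE kernel module no longer knows about it) yet ref=1, so something in userspace still holds a reference and the inode never leaves the active table; the stat-prefetch context with op_ret=-1, op_errno=2 (ENOENT) suggests a failed-lookup path. Below is a standalone, hypothetical C sketch of that shape -- not GlusterFS source; fake_inode, inode_ref_, inode_unref_ and lookup_leaky are made-up names.

/* Classic refcount-leak shape: an inode is ref'd before a lookup, and
 * the error path returns without the matching unref, so ref never
 * drops to zero and the inode stays "active" with nlookup=0. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_inode {
        int ref;      /* userspace references (xlators, fds, contexts) */
        int nlookup;  /* references the FUSE kernel module knows about */
};

static void inode_ref_(struct fake_inode *i)
{
        i->ref++;
}

static void inode_unref_(struct fake_inode *i)
{
        if (--i->ref == 0 && i->nlookup == 0) {
                printf("inode destroyed\n");
                free(i);
        }
}

/* Leaky variant: bails out on an error without releasing its reference. */
static int lookup_leaky(struct fake_inode *i, int op_ret, int op_errno)
{
        inode_ref_(i);                 /* taken for the duration of the call */
        if (op_ret == -1)
                return -op_errno;      /* BUG: missing inode_unref_(i) */
        inode_unref_(i);
        return 0;
}

int main(void)
{
        struct fake_inode *i = calloc(1, sizeof(*i));

        lookup_leaky(i, -1, ENOENT);   /* mimics op_ret=-1, op_errno=2 */
        printf("ref=%d nlookup=%d -> leaked, still 'active'\n",
               i->ref, i->nlookup);
        return 0;
}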

Comment 1 Raghavendra Bhat 2011-09-20 07:06:50 UTC
The statedump showed some more information indicating leaks (memory leaks as well).


pool-name=vol-io-cache:rbthash-for-pages
hot-count=255
cold-count=1
padded_sizeof=68
alloc-count=131590
max-alloc=256

pool-name=fuse:inode
hot-count=16384
cold-count=0
padded_sizeof=164
alloc-count=1360629
max-alloc=16384

[arena.18]
arena.18.mem_base=0x2aaac4800000
arena.18.active_cnt=60
arena.18.passive_cnt=4
arena.18.alloc_cnt=9453818
arena.18.max_active=64
arena.18.page_size=131072


There are 304 128k-sized iobuf arenas.
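
If I am reading the mem-pool counters right, hot-count is the number of objects currently handed out from the pool and cold-count the number sitting free in it, so fuse:inode with hot-count=16384, cold-count=0 and max-alloc=16384 means every pooled inode object is in use -- consistent with inodes being allocated but never released. An illustrative sketch of that reading (not the GlusterFS mem-pool code):

/* Sketch: flag pools whose every object is in use, as in the
 * fuse:inode lines above.  Figures are copied from the statedump. */
#include <stdio.h>

struct pool_stats {
        const char *name;
        int hot_count;    /* objects currently allocated to callers */
        int cold_count;   /* objects free inside the pool           */
        int max_alloc;    /* high-water mark of simultaneous allocs */
};

static void check_pool(const struct pool_stats *p)
{
        int exhausted = (p->cold_count == 0 && p->hot_count == p->max_alloc);

        printf("%-32s in-use=%d free=%d%s\n", p->name, p->hot_count,
               p->cold_count,
               exhausted ? "  <-- pool exhausted, possible leak" : "");
}

int main(void)
{
        struct pool_stats pools[] = {
                { "vol-io-cache:rbthash-for-pages", 255, 1, 256 },
                { "fuse:inode", 16384, 0, 16384 },
        };

        check_pool(&pools[0]);
        check_pool(&pools[1]);
        return 0;
}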

Comment 2 Raghavendra Bhat 2011-09-20 08:00:17 UTC
Correction to the above comment: the statedump actually had 3-4 128k iobuf arenas, not 304.

Comment 3 Amar Tumballi 2011-09-27 10:28:03 UTC
More on it at http://review.gluster.com/504

