Bug 765322 (GLUSTER-3590)

Summary: [glusterfs-v3.3.0qa9-78-gb23d329]: inode leak in glusterfs
Product: [Community] GlusterFS
Component: core
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: medium
Reporter: Raghavendra Bhat <rabhat>
Assignee: Amar Tumballi <amarts>
CC: gluster-bugs, vraman
Doc Type: Bug Fix

Description Raghavendra Bhat 2011-09-20 06:44:45 UTC
There seems to be an inode leak in glusterfs. Below is the statedump of the glusterfs client taken after the nightly sanity tests (caches were dropped before taking the statedump).

[xlator.mount.fuse.itable.active.12]
xlator.mount.fuse.itable.active.12.gfid=00000000-0000-0000-0000-000000000000
xlator.mount.fuse.itable.active.12.nlookup=0
xlator.mount.fuse.itable.active.12.ref=1
xlator.mount.fuse.itable.active.12.ino=0
xlator.mount.fuse.itable.active.12.ia_type=0

[xlator.performance.stat-prefetch.inodectx]
xlator.performance.stat-prefetch.inodectx.inode.gfid=00000000-0000-0000-0000-000000000000
xlator.performance.stat-prefetch.inodectx.inode.ino=0
xlator.performance.stat-prefetch.inodectx.looked_up=yes
xlator.performance.stat-prefetch.inodectx.lookup_in_progress=no
xlator.performance.stat-prefetch.inodectx.need_unwind=no
xlator.performance.stat-prefetch.inodectx.op_ret=-1
xlator.performance.stat-prefetch.inodectx.op_errno=2

[xlator.mount.fuse.itable.active.13]
xlator.mount.fuse.itable.active.13.gfid=04e13977-ba6a-44ae-9dc5-38bf5e325d34
xlator.mount.fuse.itable.active.13.nlookup=0
xlator.mount.fuse.itable.active.13.ref=1
xlator.mount.fuse.itable.active.13.ino=-67660301
xlator.mount.fuse.itable.active.13.ia_type=2

[xlator.performance.stat-prefetch.inodectx]
xlator.performance.stat-prefetch.inodectx.inode.gfid=04e13977-ba6a-44ae-9dc5-38bf5e325d34
xlator.performance.stat-prefetch.inodectx.inode.ino=-67660301
xlator.performance.stat-prefetch.inodectx.looked_up=yes
xlator.performance.stat-prefetch.inodectx.lookup_in_progress=no
xlator.performance.stat-prefetch.inodectx.need_unwind=no
xlator.performance.stat-prefetch.inodectx.op_ret=0
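
For context on how to read these entries: in the fuse inode table, nlookup counts lookups the kernel has not yet forgotten, while ref counts references held inside the client process, and an inode should stay in the active list only while one of the two is non-zero. The first entry above (nlookup=0, ref=1, all-zero gfid) is therefore consistent with a reference taken by some xlator and never released. The following minimal, self-contained C sketch illustrates that bookkeeping; it is an illustration only, and the demo_* names are hypothetical, not GlusterFS source.

/*
 * Minimal illustration (not GlusterFS source): an inode-table entry
 * that stays "active" while either counter is non-zero.  The names
 * demo_inode_t, demo_unref(), demo_forget() are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    char     gfid[37];   /* textual gfid, as seen in the statedump   */
    uint32_t ref;        /* references held by xlators in the client */
    uint64_t nlookup;    /* lookups the kernel still remembers       */
} demo_inode_t;

static int demo_is_active(const demo_inode_t *in)
{
    /* An inode can move to the lru/free list only when nobody in the
     * process holds a ref AND the kernel has forgotten it. */
    return in->ref > 0 || in->nlookup > 0;
}

static void demo_unref(demo_inode_t *in) { if (in->ref) in->ref--; }

static void demo_forget(demo_inode_t *in, uint64_t n)
{
    in->nlookup = (n >= in->nlookup) ? 0 : in->nlookup - n;
}

int main(void)
{
    /* The entry from xlator.mount.fuse.itable.active.12 above:
     * nlookup=0 (kernel already forgot it) but ref=1, so something
     * inside the client still pins it -> it never leaves "active". */
    demo_inode_t leaked = { "00000000-0000-0000-0000-000000000000", 1, 0 };

    demo_forget(&leaked, 0);                 /* nothing left to forget */
    printf("active after forget: %d\n", demo_is_active(&leaked));  /* 1 */

    demo_unref(&leaked);                     /* the missing unref      */
    printf("active after unref:  %d\n", demo_is_active(&leaked));  /* 0 */
    return 0;
}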

Comment 1 Raghavendra Bhat 2011-09-20 07:06:50 UTC
The statedump showed some more information indicating leaks (memory leaks as well).


pool-name=vol-io-cache:rbthash-for-pages
hot-count=255
cold-count=1
padded_sizeof=68
alloc-count=131590
max-alloc=256

pool-name=fuse:inode
hot-count=16384
cold-count=0
padded_sizeof=164
alloc-count=1360629
max-alloc=16384

[arena.18]
arena.18.mem_base=0x2aaac4800000
arena.18.active_cnt=60
arena.18.passive_cnt=4
arena.18.alloc_cnt=9453818
arena.18.max_active=64
arena.18.page_size=131072


There are 304 128k sized iobuf arenas.
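
Reading the pool counters: hot-count is the number of entries currently in use and cold-count the entries sitting free, so fuse:inode showing hot-count=16384 with cold-count=0 (equal to max-alloc) means every pooled inode slot is live, which fits an inode leak. The sketch below does a rough footprint computation from the dumped numbers; the formulas (hot-count x padded_sizeof per mem-pool, page_size x (active_cnt + passive_cnt) per iobuf arena) are my reading of the counters, not GlusterFS code.

/*
 * Back-of-the-envelope footprint of the dumped pools/arenas.
 * The formulas are an assumption about how to read the counters,
 * not taken from GlusterFS itself.
 */
#include <stdio.h>

int main(void)
{
    /* pool-name=fuse:inode: hot-count x padded_sizeof */
    unsigned long fuse_inode_bytes = 16384UL * 164UL;

    /* pool-name=vol-io-cache:rbthash-for-pages */
    unsigned long rbthash_bytes    = 255UL * 68UL;

    /* arena.18: 128 KiB pages, 60 active + 4 passive */
    unsigned long arena18_bytes    = 131072UL * (60UL + 4UL);

    printf("fuse:inode pool holds   ~%lu bytes (~%.1f MiB)\n",
           fuse_inode_bytes, fuse_inode_bytes / 1048576.0);
    printf("rbthash-for-pages holds ~%lu bytes\n", rbthash_bytes);
    printf("arena.18 alone maps     ~%lu bytes (%.0f MiB)\n",
           arena18_bytes, arena18_bytes / 1048576.0);
    return 0;
}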

Comment 2 Raghavendra Bhat 2011-09-20 08:00:17 UTC
Correction to the above comment: there were actually 3-4 128k iobuf arenas in the statedump, not 304.

Comment 3 Amar Tumballi 2011-09-27 10:28:03 UTC
More on this at http://review.gluster.com/504.