| Summary: | [glusterfs-v3.3.0qa9-78-gb23d329]: inode leak in glusterfs | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Raghavendra Bhat <rabhat> |
| Component: | core | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | mainline | CC: | gluster-bugs, vraman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Raghavendra Bhat
2011-09-20 06:44:45 UTC
The statedump showed some more information indicating leaks (memory as well):

pool-name=vol-io-cache:rbthash-for-pages
hot-count=255
cold-count=1
padded_sizeof=68
alloc-count=131590
max-alloc=256

pool-name=fuse:inode
hot-count=16384
cold-count=0
padded_sizeof=164
alloc-count=1360629
max-alloc=16384

[arena.18]
arena.18.mem_base=0x2aaac4800000
arena.18.active_cnt=60
arena.18.passive_cnt=4
arena.18.alloc_cnt=9453818
arena.18.max_active=64
arena.18.page_size=131072

There are 304 128k sized iobuf arenas.

Correction: in the comment above there were actually only 3-4 128k iobuf arenas in the statedump, not 304.

More on it at http://review.gluster.com/504
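The fuse:inode pool above is the telltale sign of the leak: its hot-count (objects currently handed out) equals its max-alloc (16384) with cold-count=0, meaning every slot in the mem-pool is in use and nothing is being returned. As a rough sketch (not part of GlusterFS; `find_exhausted_pools` is a hypothetical helper), a statedump can be scanned for pools in that state:

```python
import re

def find_exhausted_pools(statedump_text):
    """Return mem-pool entries whose hot-count has reached max-alloc
    with cold-count=0 -- the pattern the fuse:inode pool shows here,
    consistent with objects being allocated but never released."""
    pools = []
    current = {}
    for line in statedump_text.splitlines():
        m = re.match(r"(pool-name|hot-count|cold-count|max-alloc)=(.+)",
                     line.strip())
        if not m:
            continue
        key, value = m.groups()
        if key == "pool-name":
            current = {"pool-name": value}
            pools.append(current)
        elif current:
            current[key] = int(value)
    return [p for p in pools
            if p.get("hot-count") == p.get("max-alloc")
            and p.get("cold-count") == 0]

sample = """\
pool-name=vol-io-cache:rbthash-for-pages
hot-count=255
cold-count=1
max-alloc=256
pool-name=fuse:inode
hot-count=16384
cold-count=0
max-alloc=16384
"""
# Flags only the fuse:inode pool; the io-cache pool still has a free slot.
print(find_exhausted_pools(sample))
```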