| Summary: | Memory leakage in brick process [Release-3.3.qa15] | ||
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Vijaykumar Koppad <vkoppad> |
| Component: | core | Assignee: | Raghavendra Bhat <rabhat> |
| Status: | CLOSED WORKSFORME | QA Contact: | Vijaykumar Koppad <vkoppad> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | mainline | CC: | bbandari, gluster-bugs |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | glusterfs-3.4.0qa4 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2012-12-04 10:17:39 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | |||
Description
Vijaykumar Koppad
2011-12-16 11:21:43 UTC
Created attachment 547798 [details]
valgrind log for brick process
Attaching valgrind log
I have a suspicion that this may be the result of capturing the valgrind log before actually doing a cleanup. Can you make sure to remove every file from the mount point and then check the result again (also after an 'echo 3 > /proc/sys/vm/drop_caches' on the machine)? A C sketch of that cache-drop step follows the next attachment.

Created attachment 552393 [details]
valgrind log for brick process of glusterfs-3.2.6qa1
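To make the suggested cleanup step reproducible, here is a minimal C sketch of the cache drop, equivalent to `echo 3 > /proc/sys/vm/drop_caches`. It is an illustration, not part of GlusterFS; it must run as root, and the file name is made up:

```c
/* drop_caches.c - a minimal sketch of the cleanup step suggested above:
 * flush dirty pages, then drop the kernel's pagecache, dentries and inodes
 * so cached filesystem state does not mask or inflate the valgrind numbers.
 * Requires root. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sync(); /* write dirty pages back before dropping caches */

    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (f == NULL) {
        perror("fopen /proc/sys/vm/drop_caches");
        return 1;
    }
    fputs("3\n", f); /* 1 = pagecache, 2 = dentries/inodes, 3 = both */
    fclose(f);
    return 0;
}
```

Compile with `cc drop_caches.c -o drop_caches` and run it as root after removing every file from the mount point, then capture the valgrind log again.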
These logs I get both in the mount logs and the brick logs. I got similar logs even in glusterfs-3.2.6qa1. These are the valgrind logs of the mount point:

```
57 bytes in 1 blocks are definitely lost in loss record 28 of 220
==23005==    at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==23005==    by 0x4C5C3FF: __gf_malloc (mem-pool.c:167)
==23005==    by 0x62E3445: init (fuse-bridge.c:3643)
==23005==    by 0x4C2AB80: __xlator_init (xlator.c:1418)
==23005==    by 0x4C2ACAA: xlator_init (xlator.c:1441)
==23005==    by 0x403FE0: create_fuse_mount (glusterfsd.c:329)
==23005==    by 0x406FC5: main (glusterfsd.c:1497)

512,173 (230,912 direct, 281,261 indirect) bytes in 656 blocks are definitely lost in loss record 209 of 220
==23005==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==23005==    by 0x4C5C312: __gf_calloc (mem-pool.c:142)
==23005==    by 0x4C43A72: __inode_create (inode.c:544)
==23005==    by 0x4C43B86: inode_new (inode.c:576)
==23005==    by 0x62D933E: fuse_create_resume (fuse-bridge.c:1601)
==23005==    by 0x62CF961: fuse_resolve_and_resume (fuse-resolve.c:763)
==23005==    by 0x62D9A4D: fuse_create (fuse-bridge.c:1658)
==23005==    by 0x62E2224: fuse_thread_proc (fuse-bridge.c:3223)
==23005==    by 0x3BF80077E0: start_thread (in /lib64/libpthread-2.12.so)
==23005==    by 0xB6A76FF: ???

LEAK SUMMARY:
==23005==    definitely lost: 280,732 bytes in 1,423 blocks
==23005==    indirectly lost: 302,650 bytes in 1,970 blocks
==23005==    possibly lost: 33,948,692 bytes in 6,267 blocks
==23005==    still reachable: 38,287 bytes in 68 blocks
==23005==    suppressed: 0 bytes in 0 blocks
```

Created attachment 552398 [details]
valgrind log of mount-point for glusterfs-3.2.6qa1
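The second record above points at an inode allocated via inode_new() / __inode_create() on the FUSE create path and never released. A minimal, self-contained C sketch of that reference-counting leak pattern, using toy stand-ins rather than GlusterFS's real inode_t API, looks like this:

```c
/* A toy stand-in for glusterfs' refcounted inode_t; the real type and
 * inode_new()/inode_unref() live in libglusterfs (inode.c in the trace). */
#include <stdlib.h>

typedef struct toy_inode {
    int ref; /* reference count */
} toy_inode_t;

static toy_inode_t *toy_inode_new(void)
{
    /* mirrors the calloc in __inode_create (inode.c:544 in the trace) */
    toy_inode_t *inode = calloc(1, sizeof(*inode));
    if (inode != NULL)
        inode->ref = 1;
    return inode;
}

static void toy_inode_unref(toy_inode_t *inode)
{
    if (inode != NULL && --inode->ref == 0)
        free(inode);
}

int main(void)
{
    /* Leaky path: a reference is taken on create and no code path ever
     * drops it, so valgrind reports the block as "definitely lost". */
    toy_inode_t *leaked = toy_inode_new();
    (void)leaked; /* pointer discarded without an unref */

    /* Correct path: every toy_inode_new() is paired with an unref. */
    toy_inode_t *ok = toy_inode_new();
    toy_inode_unref(ok);

    return 0;
}
```

Run under `valgrind --leak-check=full`, the first block is reported as definitely lost with a stack ending in toy_inode_new, the same shape as the fuse_create_resume -> inode_new record above.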
Attaching logs.

Vijay, can you confirm whether this behavior still exists with qa23?

VijayKumar, can you please confirm the behavior with the latest build?

I have tested with the new build, i.e. 3.3.0qa37. I still got some leaks; I am attaching those logs.

Created attachment 579522 [details]
Valgrind logs of the brick process of the master.
Created attachment 579524 [details]
Valgrind logs of the second brick process of the master.
CHANGE: http://review.gluster.com/3244 (protocol: fix memory leak of lk-owner buffer in *lk() calls) merged in master by Anand Avati (avati)

Moving it to ON_QA considering the multiple fixes that handle the brick-side leaks. The only pending item is the leaks reported by posix-acl, which are not going into 3.3.0, maybe 3.3.1 or so, as they need more work on the RCA.

VijayKumar, please verify the behavior with the latest master.

Created attachment 586283 [details]
Definitely lost valgrind logs from all the bricks.
I still see some "definitely lost" records in the valgrind logs. It would be good if all the definitely lost records were eliminated from the valgrind output.

I went through the definitely lost records, and none of them is in a common code path. Hence removing this from the 3.3.0beta blocker list. The remaining ones need the posix_acl fixes.

In our recent longevity runs we didn't hit any such leaks while running for more than 2 weeks. Marking the fixed-in version as 3.4.0qa4, as that is the latest master release on which we ran these tests (the longevity run started a few commits earlier).
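As background for why the comments above focus on the "definitely lost" records, here is a minimal C example (illustrative, not from GlusterFS) of how valgrind distinguishes that category from "still reachable":

```c
#include <stdlib.h>

static char *global_buf; /* still pointed to at program exit */

int main(void)
{
    /* "definitely lost": the only pointer to the block is overwritten,
     * so no code could ever free it. These are the records worth chasing. */
    char *lost = malloc(57);
    lost = NULL;
    (void)lost;

    /* "still reachable": never freed, but a live pointer remains at exit,
     * so valgrind does not count it as a genuine leak. */
    global_buf = malloc(64);

    return 0;
}
```

`valgrind --leak-check=full ./a.out` flags only the first allocation as definitely lost, which is why driving the definitely lost records to zero is the verification criterion used in this report.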