Bug 1343374
Summary: | Gluster fuse client crashed generating core dump | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Nithya Balachandran <nbalacha>
Component: | transport | Assignee: | Nithya Balachandran <nbalacha>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | mainline | CC: | bkunal, bugs, csaba, rhs-bugs, storage-qa-internal
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.9.0 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1343320 | |
: | 1354250 1360553 (view as bug list) | Environment: |
Last Closed: | 2017-03-27 18:16:04 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1343320 | |
Bug Blocks: | 1354250, 1360553 | |
Description
Nithya Balachandran
2016-06-07 08:45:34 UTC
RCA: There is a memory leak in the socket_connect code in the failure case. In socket_connect ():

```c
/* if sock != -1, then cleanup is done from the event handler */
if (ret == -1 && sock == -1) {
        /* Cleanup requires sending a notification to the upper layer,
           which in turn holds the big_lock. There can be a dead-lock
           situation if big_lock is already held by the current thread,
           so transfer the ownership to a separate thread for cleanup. */
        arg = GF_CALLOC (1, sizeof (*arg),
                         gf_sock_connect_error_state_t);
        arg->this = THIS;
        arg->trans = this;
        arg->refd = refd;
        th_ret = pthread_create (&th_id, NULL,
                                 socket_connect_error_cbk, arg);
        if (th_ret) {
                gf_log (this->name, GF_LOG_ERROR, "pthread_create"
                        "failed: %s", strerror(errno));
                GF_FREE (arg);
                GF_ASSERT (0);
        }
}
```

pthread_create does not create a detached thread, so the thread's resources are never released. socket_connect is called at 3-second intervals, so the leaked resources quickly add up and the process runs out of memory.

REVIEW: http://review.gluster.org/14661 (rpc/socket: pthread resources are not cleanup up) posted (#1) for review on master by N Balachandran (nbalacha)

Fix: Create a detached thread so that all thread resources are cleaned up automatically.

REVIEW: http://review.gluster.org/14661 (rpc/socket: pthread resources are not cleaned up) posted (#2) for review on master by N Balachandran (nbalacha)

REVIEW: http://review.gluster.org/14875 (rpc/socket: pthread resources are not cleaned up) posted (#1) for review on master by N Balachandran (nbalacha)

REVIEW: http://review.gluster.org/14875 (rpc/socket: pthread resources are not cleaned up) posted (#2) for review on master by N Balachandran (nbalacha)

REVIEW: http://review.gluster.org/14875 (rpc/socket: pthread resources are not cleaned up) posted (#3) for review on master by N Balachandran (nbalacha)

COMMIT: http://review.gluster.org/14875 committed in master by Jeff Darcy (jdarcy)

------

```
commit 9886d568a7a8839bf3acc81cb1111fa372ac5270
Author: N Balachandran <nbalacha>
Date:   Fri Jul 8 10:46:46 2016 +0530

    rpc/socket: pthread resources are not cleaned up

    A socket_connect failure creates a new pthread which is not a
    detached thread. As no pthread_join is called, the thread resources
    are not cleaned up, causing a memory leak.

    Now, socket_connect creates a detached thread to handle failure.

    Change-Id: Idbf25d312f91464ae20c97d501b628bfdec7cf0c
    BUG: 1343374
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/14875
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>
```

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/
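
For context, the sketch below illustrates the general pattern the fix describes: creating the error-handling thread with the PTHREAD_CREATE_DETACHED attribute so its stack and descriptor are released when it exits, instead of a default joinable thread that is never pthread_join()ed. This is a minimal standalone example with hypothetical names (spawn_detached_cleanup, cleanup_worker), not the actual code from review 14875.

```c
#include <pthread.h>

/* Hypothetical worker: stands in for socket_connect_error_cbk. */
static void *
cleanup_worker (void *arg)
{
        /* ... perform the deferred cleanup work ... */
        return NULL;
}

/* Spawn the worker as a detached thread.  A detached thread releases
 * its resources automatically when the thread function returns, so no
 * pthread_join() is needed and nothing leaks if it is never joined. */
static int
spawn_detached_cleanup (void *arg)
{
        pthread_t      tid;
        pthread_attr_t attr;
        int            ret;

        ret = pthread_attr_init (&attr);
        if (ret != 0)
                return ret;

        ret = pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED);
        if (ret == 0)
                ret = pthread_create (&tid, &attr, cleanup_worker, arg);

        pthread_attr_destroy (&attr);
        return ret;
}
```

An equivalent alternative is for the thread itself to call pthread_detach (pthread_self ()) as its first action; either way the thread's resources are reclaimed on exit rather than accumulating with every failed connect attempt.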