Bug 1061229

Summary: glfs_fini leaks threads
Product: [Community] GlusterFS
Reporter: Kelly Burkhart <kelly.burkhart>
Component: libgfapi
Assignee: Ric Wheeler <rwheeler>
Status: CLOSED DUPLICATE
QA Contact: Sudhir D <sdharane>
Severity: high
Priority: high
Version: 3.5.1
CC: c.affolter, gluster-bugs, jclift, lmohanty, ndevos, pgurusid, purpleidea, tm
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-06-24 16:14:09 UTC

Description Kelly Burkhart 2014-02-04 15:25:44 UTC
Description of problem:
glfs_new/glfs_set_volfile/glfs_init results in 4 child threads. A subsequent call to glfs_fini does not join or clean up these threads.

Version-Release number of selected component (if applicable):
3.4.2

How reproducible:
every time

Steps to Reproduce:

#include <glusterfs/api/glfs.h>

for (idx = 0; idx < N; ++idx) {
    glfs_t *fs = glfs_new("testvol");                      /* volume name is an example */
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* host/port are examples */
    glfs_init(fs);
    /* pause a bit here */
    glfs_fini(fs);                                          /* worker threads are left running */
}

Actual results:
The loop above leaves N*4 extra threads running at exit.
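
One way to confirm the N*4 count from inside the reproducer (not part of the original report) is to read the Threads: field from /proc/self/status after each glfs_fini call. The helper below is an illustrative, Linux-specific sketch; the function name is hypothetical:

#include <stdio.h>

/* Illustrative helper: returns this process's thread count as reported
 * by /proc/self/status on Linux, or -1 on error. */
static int current_thread_count(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    int threads = -1;

    if (!fp)
        return -1;
    while (fgets(line, sizeof(line), fp)) {
        if (sscanf(line, "Threads: %d", &threads) == 1)
            break;
    }
    fclose(fp);
    return threads;
}

/* Calling current_thread_count() after each glfs_fini in the loop above
 * shows the count growing by roughly 4 per iteration instead of returning
 * to its starting value. */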

Expected results:
I expect that glfs_fini will clean up all resources associated with the glfs_t.  

Alternatively, subsequent glfs_t instances could reuse the previously created threads. Neither of these behaviors is documented in the header.

Additional info:

Comment 2 Tiziano Müller 2014-06-05 09:49:03 UTC
Can we please raise the severity to urgent?

We can reproduce this when using libvirt and Qemu with volumes attached via libgfapi: every snapshot taken for a VM adds ~8 new threads per volume in libvirtd (the following is for a VM with 2 volumes):

vm-test-02 ~ # ps -eLf | grep -c libvirtd
28
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60


Almost all of our machines have at least 2 volumes, which means a single backup run over all VMs on one server leaves around 600 stale threads in libvirtd; at least libvirtd can be restarted periodically as a workaround.

Qemu is even worse:

vm-test-02 ~ # ps -eLf | grep -c qemu
12
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c qemu
76

which gives us 32 stale threads per volume.

Comment 3 Justin Clift 2014-06-24 12:58:24 UTC
Possibly related to this (for which there's a fix in progress):

  https://bugzilla.redhat.com/show_bug.cgi?id=1093594

Comment 4 Tiziano Müller 2014-06-24 15:48:39 UTC
Yes, I think this one can be closed as a duplicate of #1093594, unless you want to keep it open as a request to backport the patches mentioned in #1093594 to 3.5.2.

Comment 5 Niels de Vos 2014-06-24 16:14:09 UTC

*** This bug has been marked as a duplicate of bug 1093594 ***

Comment 6 Poornima G 2014-06-25 07:20:52 UTC
@Tiziano Müller, if you have a test setup, would it be possible to try the patch http://review.gluster.org/#/c/7642/ and see if it fixes the issue? It would be helpful to catch errors early in the cycle.

The RPMs with this patch can be found at http://build.gluster.org/job/glusterfs-devrpms/1981/ for Fedora or http://build.gluster.org/job/glusterfs-devrpms-el6/1970/ for el6.