Bug 1061229 - glfs_fini leaks threads
Summary: glfs_fini leaks threads
Keywords:
Status: CLOSED DUPLICATE of bug 1093594
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Ric Wheeler
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-04 15:25 UTC by Kelly Burkhart
Modified: 2014-06-25 07:20 UTC
CC List: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-06-24 16:14:09 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kelly Burkhart 2014-02-04 15:25:44 UTC
Description of problem:
The glfs_new/glfs_set_volfile/glfs_init sequence results in 4 child threads.  A subsequent call to glfs_fini does not join and clean up these threads.

Version-Release number of selected component (if applicable):
3.4.2

How reproducible:
every time

Steps to Reproduce:

for (idx = 0; idx < N; ++idx) {
    glfs_t *fs = glfs_new("testvol");                        /* volume name is a placeholder */
    glfs_set_volfile_server(fs, "tcp", "localhost", 24007);  /* host/port are placeholders */
    glfs_init(fs);
    sleep(1);    /* pause a bit here */
    glfs_fini(fs);
}

Actual results:
The above loop leaves N*4 leaked threads running when it exits.
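
For reference, a minimal self-contained sketch of such a reproducer, which also prints the process thread count after each glfs_fini by reading /proc/self/status. The volume name, server host, and port are placeholders and would need to match an actual test setup:

#include <stdio.h>
#include <unistd.h>
#include <glusterfs/api/glfs.h>

/* Return the "Threads:" value from /proc/self/status (Linux only). */
static int thread_count(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    int n = -1;

    while (f && fgets(line, sizeof(line), f))
        if (sscanf(line, "Threads: %d", &n) == 1)
            break;
    if (f)
        fclose(f);
    return n;
}

int main(void)
{
    int idx;

    for (idx = 0; idx < 10; ++idx) {
        /* "testvol" and "localhost" are placeholders for a real volume/server. */
        glfs_t *fs = glfs_new("testvol");
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        glfs_init(fs);
        sleep(1);    /* pause a bit here */
        glfs_fini(fs);
        printf("iteration %d: %d threads\n", idx, thread_count());
    }
    return 0;
}

Linked against gfapi (e.g. gcc repro.c -lgfapi), the printed thread count would be expected to grow by roughly four per iteration while the leak is present.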

Expected results:
I expect glfs_fini to clean up all resources associated with the glfs_t, including its threads.

Alternatively, subsequent glfs_t instances could reuse the previously created threads. Neither behavior is documented in the header.

Additional info:

Comment 2 Tiziano Müller 2014-06-05 09:49:03 UTC
Can we please raise the severity to urgent?

We can reproduce this when using libvirt and QEMU with volumes attached via libgfapi: every snapshot taken for a VM leaves ~8 new threads in libvirt per volume (the following is for a VM with 2 volumes):

vm-test-02 ~ # ps -eLf | grep -c libvirtd
28
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60


Almost all of our machines have at least 2 volumes, which means one backup run over all VMs on one server leaves around 600 stale threads in libvirt; we could restart it periodically as a workaround.

Qemu is even worse:

vm-test-02 ~ # ps -eLf | grep -c qemu
12
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c qemu
76

which gives us 32 stale threads per volume.

Comment 3 Justin Clift 2014-06-24 12:58:24 UTC
Possibly related to this (for which there's a fix in progress):

  https://bugzilla.redhat.com/show_bug.cgi?id=1093594

Comment 4 Tiziano Müller 2014-06-24 15:48:39 UTC
Yes, I think this one can be closed as a duplicate of #1093594, unless you want to keep it as a backport request for the patches mentioned in #1093594 to 3.5.2.

Comment 5 Niels de Vos 2014-06-24 16:14:09 UTC

*** This bug has been marked as a duplicate of bug 1093594 ***

Comment 6 Poornima G 2014-06-25 07:20:52 UTC
@Tiziano Müller, if you have a test setup, is it possible to try the patch http://review.gluster.org/#/c/7642/ and see if it fixes the issue? It would help us catch errors early in the cycle.

The RPMs with this patch can be found at http://build.gluster.org/job/glusterfs-devrpms/1981/ for Fedora or http://build.gluster.org/job/glusterfs-devrpms-el6/1970/ for EL6.

