Bug 1061229
Summary: | glfs_fini leaks threads | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Kelly Burkhart <kelly.burkhart>
Component: | libgfapi | Assignee: | Ric Wheeler <rwheeler>
Status: | CLOSED DUPLICATE | QA Contact: | Sudhir D <sdharane>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 3.5.1 | CC: | c.affolter, gluster-bugs, jclift, lmohanty, ndevos, pgurusid, purpleidea, tm
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2014-06-24 16:14:09 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Kelly Burkhart
2014-02-04 15:25:44 UTC
Can we please raise the severity to urgent? We can reproduce this when using Libvirt and Qemu with volumes attached via libgfapi: every snapshot taken for a VM adds ~8 new threads to libvirtd per volume (the following is for a VM with 2 volumes):

```
vm-test-02 ~ # ps -eLf | grep -c libvirtd
28
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
44
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c libvirtd
60
```

Almost all of our machines have at least 2 volumes, which means a single backup run across all VMs on one server leaves ~600 stale threads in libvirtd. We could restart libvirtd periodically, but Qemu is even worse:

```
vm-test-02 ~ # ps -eLf | grep -c qemu
12
vm-test-02 ~ # ./make-snapshot.sh
vm-test-02 ~ # ./cleanup-snapshot.sh
vm-test-02 ~ # ps -eLf | grep -c qemu
76
```

which gives us 32 stale threads per volume.

---

Possibly related to this (for which there's a fix in progress): https://bugzilla.redhat.com/show_bug.cgi?id=1093594

---

Yes, I think this one can be closed as a duplicate of #1093594, unless you want to keep it as a backport request for the patches mentioned in #1093594 to 3.5.2.

---

*** This bug has been marked as a duplicate of bug 1093594 ***

---

@Tiziano Müller, if you have a test setup, is it possible to try the patch http://review.gluster.org/#/c/7642/ and see if it fixes the leak? It would be helpful to catch errors early in the cycle. The RPMs with this patch can be found at http://build.gluster.org/job/glusterfs-devrpms/1981/ for Fedora or http://build.gluster.org/job/glusterfs-devrpms-el6/1970/ for el6.
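Note that `ps -eLf | grep -c <name>` counts one line per thread but also counts the `grep` process itself and any unrelated commands whose line happens to match. A slightly more precise sketch of the same check (an assumption on our part, not part of the bug report: the `count_threads` helper is hypothetical and relies on Linux `/proc`) reads the kernel's own `Threads:` field instead:

```shell
#!/bin/sh
# count_threads NAME: sum the "Threads:" counts from /proc/<pid>/status
# for every process whose command name matches NAME (via pgrep).
# Hypothetical helper, Linux-only; not taken from the bug report.
count_threads() {
    total=0
    for pid in $(pgrep "$1"); do
        # "Threads: N" in /proc/<pid>/status is the per-process thread
        # count as seen by the kernel; a PID may exit between pgrep and
        # the read, hence the 2>/dev/null and the ${n:-0} fallback.
        n=$(awk '/^Threads:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
        total=$((total + ${n:-0}))
    done
    echo "$total"
}

count_threads "${1:-qemu}"
```

Running this before and after `make-snapshot.sh` / `cleanup-snapshot.sh` should show the same monotonically growing counts as the sessions above if the glfs_fini thread leak is present.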