Description of problem:
There are memory leaks in the libvirt client Python bindings.

Version-Release number of selected component (if applicable):
# rpm -q libvirt-python python libvirt-sandbox kernel
libvirt-python-1.1.1-3.el7.x86_64
python-2.7.5-7.el7.x86_64
libvirt-sandbox-0.5.0-3.el7.x86_64
kernel-3.10.0-0.rc7.64.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. # cat test1.py
import sys
import libvirt_qemu

if __name__ == "__main__":
    name = sys.argv[1]
    con = libvirt_qemu.libvirt.open(None)
    dom = con.lookupByName(name)
    libvirt_qemu.qemuMonitorCommand(dom, 'info cpus', 1)

2. # cat test1.py
import sys
import libvirt_lxc

if __name__ == "__main__":
    name = sys.argv[1]
    con = libvirt_lxc.libvirt.open('lxc:///')
    dom = con.lookupByName(name)
    fd = libvirt_lxc.lxcOpenNamespace(dom, 0)

3. Run the above test cases:
# valgrind -v --leak-check=full python test1.py foo
# valgrind -v --leak-check=full python test1.py bar

Note: change 'foo' and 'bar' to the names of your running qemu and lxc guests.

Actual results:

==20591== 59 bytes in 1 blocks are definitely lost in loss record 277 of 1,519
==20591==    at 0x4A0883C: malloc (vg_replace_malloc.c:270)
==20591==    by 0x3E4A886349: strdup (in /usr/lib64/libc-2.17.so)
==20591==    by 0xBDF2074: virStrdup (virstring.c:554)
==20591==    by 0xBEB3B9C: remoteDomainQemuMonitorCommand (remote_driver.c:5179)
==20591==    by 0xBB5BD21: virDomainQemuMonitorCommand (libvirt-qemu.c:100)
==20591==    by 0xB94D564: libvirt_qemu_virDomainQemuMonitorCommand (libvirt-qemu-override.c:75)
==20591==    by 0x3EF4ADDCED: PyEval_EvalFrameEx (in /usr/lib64/libpython2.7.so.1.0)
==20591==    by 0x3EF4ADD80B: PyEval_EvalFrameEx (in /usr/lib64/libpython2.7.so.1.0)
==20591==    by 0x3EF4ADEC7C: PyEval_EvalCodeEx (in /usr/lib64/libpython2.7.so.1.0)
==20591==    by 0x3EF4ADED81: PyEval_EvalCode (in /usr/lib64/libpython2.7.so.1.0)
==20591==    by 0x3EF4AF78AE: ??? (in /usr/lib64/libpython2.7.so.1.0)
==20591==    by 0x3EF4AF89CD: PyRun_FileExFlags (in /usr/lib64/libpython2.7.so.1.0)

==20257== 20 bytes in 1 blocks are definitely lost in loss record 70 of 2,269
==20257==    at 0x4A06B2F: calloc (vg_replace_malloc.c:593)
==20257==    by 0xBDB357C: virAllocN (viralloc.c:183)
==20257==    by 0xBECAC74: virNetClientProgramCall (virnetclientprogram.c:357)
==20257==    by 0xBEA6311: callFull.isra.2 (remote_driver.c:5708)
==20257==    by 0xBEA83C5: remoteDomainLxcOpenNamespace (remote_driver.c:6035)
==20257==    by 0xBB5C164: virDomainLxcOpenNamespace (libvirt-lxc.c:94)
==20257==    by 0xB94D3BE: libvirt_lxc_virDomainLxcOpenNamespace (libvirt-lxc-override.c:77)
==20257==    by 0x3EF4ADDCED: PyEval_EvalFrameEx (in /usr/lib64/libpython2.7.so.1.0)
==20257==    by 0x3EF4ADD80B: PyEval_EvalFrameEx (in /usr/lib64/libpython2.7.so.1.0)
==20257==    by 0x3EF4ADEC7C: PyEval_EvalCodeEx (in /usr/lib64/libpython2.7.so.1.0)
==20257==    by 0x3EF4ADED81: PyEval_EvalCode (in /usr/lib64/libpython2.7.so.1.0)
==20257==    by 0x3EF4AF78AE: ??? (in /usr/lib64/libpython2.7.so.1.0)

Expected results:
No memory leaks in the libvirt client bindings.

Additional info:
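When re-running valgrind to check a candidate fix, it helps to extract just the "definitely lost" totals from the (very verbose) log. A small helper sketch for that, not part of the original reproducer, assuming memcheck's standard loss-record format:

```python
import re

def definitely_lost_bytes(valgrind_log):
    """Sum the bytes reported as 'definitely lost' across all loss records."""
    total = 0
    # Records look like: "==20591== 59 bytes in 1 blocks are definitely lost in loss record 277 of 1,519"
    pattern = r'==\d+==\s+([\d,]+) bytes in [\d,]+ blocks are definitely lost'
    for match in re.finditer(pattern, valgrind_log):
        total += int(match.group(1).replace(',', ''))
    return total

sample = ("==20591== 59 bytes in 1 blocks are definitely lost in loss record 277 of 1,519\n"
          "==20257== 20 bytes in 1 blocks are definitely lost in loss record 70 of 2,269\n")
print(definitely_lost_bytes(sample))  # → 79
```

Feeding it the log of a fixed build should yield 0 for records originating in the libvirt bindings (the interpreter itself may still report unrelated records).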
(In reply to Alex Jia from comment #0)

> 2. # cat test1.py
s/test1/test2/

> 3. run above test cases
>
> # valgrind -v --leak-check=full python test1.py foo
> # valgrind -v --leak-check=full python test1.py bar
s/test1/test2/ (in the second command)
Upstream patch sent: https://www.redhat.com/archives/libvir-list/2013-September/msg00120.html
Now pushed upstream:

commit 418137663f1ca0877a176b996ac027d89747fe90
Author:     Ján Tomko <jtomko>
AuthorDate: 2013-09-03 13:12:37 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2013-09-03 13:19:17 +0200

    Fix leaks in python bindings

    https://bugzilla.redhat.com/show_bug.cgi?id=1003828

git describe: v1.1.2-18-g4181376

Downstream patch posted:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2013-September/msg00022.html
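Both valgrind traces show the same underlying pattern: the remote driver heap-allocates the RPC result (via virStrdup in one case, virAllocN in the other), the Python override converts it into a Python object, and the C allocation is never released. A minimal ctypes sketch of that obligation, using plain libc strdup/free rather than libvirt's actual code:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
libc.strdup.restype = ctypes.c_void_p          # keep the raw address so it can be freed
libc.strdup.argtypes = [ctypes.c_char_p]
libc.free.argtypes = [ctypes.c_void_p]

ptr = libc.strdup(b"info cpus reply")          # heap copy, like virStrdup in the remote driver
reply = ctypes.cast(ptr, ctypes.c_char_p).value  # build the Python-side value first...
libc.free(ptr)                                 # ...then free the C copy, or it leaks
print(reply)  # → b'info cpus reply'
```

The leaked test programs skipped the step on the last line; the upstream fix adds the equivalent free after the Python return value has been built.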
Created attachment 795492 [details] valgrind libvirt-python qemu domain check
Created attachment 795493 [details] valgrind libvirt-python lxc domain check
According to the valgrind reports I uploaded for libvirt-1.1.1-4.el7, there are no memory leaks in libvirt, so I am changing the status to VERIFIED.
This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.