Bug 984578 - libvirtd leaks URI on destination when migrating
Summary: libvirtd leaks URI on destination when migrating
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 977961
Blocks:
 
Reported: 2013-07-15 13:42 UTC by Chris Pelland
Modified: 2013-09-19 18:08 UTC
CC List: 15 users

Fixed In Version: libvirt-0.10.2-18.el6_4.10
Doc Type: Bug Fix
Doc Text:
When migrating, libvirtd leaked the migration URI (Uniform Resource Identifier) on the destination host. A patch has been provided to fix this bug, and the migration URI is now freed correctly.
Clone Of:
Environment:
Last Closed: 2013-09-19 18:08:23 UTC
Target Upstream Version:
Embargoed:
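
For illustration, here is a minimal sketch in C of the leak pattern the Doc Text above describes: the destination side builds a heap-allocated migration URI during the prepare step and never releases it. This is not the actual libvirt patch; prepare_migration_uri() and the URI format are hypothetical stand-ins.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the destination-side prepare step:
 * it returns a heap-allocated migration URI string. */
static char *
prepare_migration_uri(const char *host, int port)
{
    char *uri = NULL;
    if (asprintf(&uri, "tcp:%s:%d", host, port) < 0)
        return NULL;
    return uri;
}

int
main(void)
{
    char *uri = prepare_migration_uri("dst.example.com", 49152);
    if (!uri)
        return 1;
    printf("incoming migration URI: %s\n", uri);
    /* The bug: returning without releasing 'uri' leaked one string
     * per incoming migration. The fix frees it once the prepare
     * step is done with it (libvirt code would use VIR_FREE(uri)). */
    free(uri);
    return 0;
}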


Attachments
libvirt valgrind log (105.71 KB, text/plain), 2013-09-06 11:33 UTC, Jincheng Miao
libvirt-0.10.2-18.el6_4.9 valgrind report (25.36 KB, text/plain), 2013-09-09 03:22 UTC, Jincheng Miao


Links
System: Red Hat Product Errata
ID: RHSA-2013:1272
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: libvirt security and bug fix update
Last Updated: 2013-09-19 22:02:43 UTC

Description Chris Pelland 2013-07-15 13:42:30 UTC
This bug has been copied from bug #977961 and has been proposed
to be backported to the 6.4 z-stream (EUS).

Comment 6 Jincheng Miao 2013-09-06 11:32:42 UTC
The URI leak in qemuMigrationPrepareDirect() is fixed, but there are other leaks:
1. 
==6668== 0 bytes in 1 blocks are definitely lost in loss record 3 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7576B: virAllocN (memory.c:128)
==6668==    by 0x4F5B045: virNetServerProgramDispatch (virnetserverprogram.c:409)
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)
==6668==    by 0x30532E890C: clone (in /lib64/libc-2.12.so)

2.
==6668== 80 bytes in 1 blocks are definitely lost in loss record 908 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7578D: virAlloc (memory.c:100)
==6668==    by 0x4E8C577: virLastErrorObject (virterror.c:204)
==6668==    by 0x4E8CEB8: virResetLastError (virterror.c:355)
...
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)

3.
==6668== 188 (32 direct, 156 indirect) bytes in 1 blocks are definitely lost in loss record 1,067 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7578D: virAlloc (memory.c:100)
==6668==    by 0x4E80092: virThreadPoolSendJob (threadpool.c:340)
==6668==    by 0x4F5C4D6: virNetServerDispatchNewMessage (virnetserver.c:245)
==6668==    by 0x4F5DFB3: virNetServerClientDispatchRead (virnetserverclient.c:912)
==6668==    by 0x4F5E0CC: virNetServerClientDispatchEvent (virnetserverclient.c:1098)
==6668==    by 0x4E6DFFE: virEventPollRunOnce (event_poll.c:485)
==6668==    by 0x4E6CD96: virEventRunDefaultImpl (event.c:247)
==6668==    by 0x4F5BBDC: virNetServerRun (virnetserver.c:748)
==6668==    by 0x423716: main (libvirtd.c:1228)

4.
==6668== 968 bytes in 1 blocks are definitely lost in loss record 1,425 of 1,619
==6668==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==6668==    by 0x305C6A4254: xmlGetGlobalState (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x305C6A3394: __xmlGenericError (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x305C6E7956: xmlRelaxNGNewParserCtxt (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x3056207C2C: ??? (in /usr/lib64/libnetcf.so.1.4.0)
==6668==    by 0x30562049A5: ncf_init (in /usr/lib64/libnetcf.so.1.4.0)
==6668==    by 0x4F22D8: interfaceOpenInterface (interface_backend_netcf.c:141)
==6668==    by 0x4F0C32C: do_open (libvirt.c:1212)
==6668==    by 0x4F0CFBA: virConnectOpen (libvirt.c:1333)
==6668==    by 0x440577: remoteDispatchOpenHelper (remote.c:757)
==6668==    by 0x4F5B0B1: virNetServerProgramDispatch (virnetserverprogram.c:431)
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)

5.
==7115== 42 (32 direct, 10 indirect) bytes in 1 blocks are definitely lost in loss record 739 of 1,635
==7115==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==7115==    by 0x4E7578D: virAlloc (memory.c:100)
==7115==    by 0x4746D7: qemuMigrationEatCookie (qemu_migration.c:475)
==7115==    by 0x475510: qemuMigrationRun (qemu_migration.c:1964)
==7115==    by 0x47629B: doNativeMigrate (qemu_migration.c:2184)
==7115==    by 0x479FAA: qemuMigrationPerform (qemu_migration.c:2853)
==7115==    by 0x4542E2: qemuDomainMigratePerform3 (qemu_driver.c:10107)
==7115==    by 0x4F11497: virDomainMigratePerform3 (libvirt.c:6253)
==7115==    by 0x42DE01: remoteDispatchDomainMigratePerform3Helper (remote.c:3593)

Do these leaks matter, or are they misreported by valgrind?

Comment 7 Jincheng Miao 2013-09-06 11:33:24 UTC
Created attachment 794670 [details]
libvirt valgrind log

Comment 8 Jiri Denemark 2013-09-06 13:53:59 UTC
Are these leaks new or were they present in the previous package (libvirt-0.10.2-18.el6_4.9) too?

Comment 9 Jincheng Miao 2013-09-09 03:20:27 UTC
Leaks 1 and 3 are new; leaks 2, 4, and 5 already existed in the previous package (libvirt-0.10.2-18.el6_4.9).

Even in libvirt-0.10.2-20.el6, leaks 2, 4, and 5 are present.
see https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14

Comment 10 Jincheng Miao 2013-09-09 03:22:09 UTC
Created attachment 795477 [details]
libvirt-0.10.2-18.el6_4.9 valgrind report

This is similar to https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14

Comment 11 Jincheng Miao 2013-09-10 02:40:45 UTC
Jiri, what do you think about these leaks?

Comment 12 Jiri Denemark 2013-09-10 07:23:30 UTC
The reported leaks do not seem to be real issues.
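
To illustrate why such records can be benign, here is a minimal sketch (not libvirt code) of the pattern behind leak 4: a library allocates per-thread global state once, keeps it in thread-local storage, and never frees it, so valgrind reports one "definitely lost" block per thread at exit. The 968-byte size is taken from the log for flavor only; the function names are hypothetical.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for per-thread library state (compare the
 * xmlGetGlobalState frames in leak 4). */
static pthread_key_t state_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void
make_key(void)
{
    /* No destructor is registered, so the block is never freed when
     * the thread exits; valgrind then calls it definitely lost. */
    pthread_key_create(&state_key, NULL);
}

static void *
get_thread_state(void)
{
    void *state;
    pthread_once(&key_once, make_key);
    state = pthread_getspecific(state_key);
    if (!state) {
        state = calloc(1, 968);   /* allocated once per thread */
        pthread_setspecific(state_key, state);
    }
    return state;
}

static void *
worker(void *arg)
{
    (void) arg;
    get_thread_state();  /* first call allocates */
    get_thread_state();  /* later calls reuse the same block */
    return NULL;         /* thread exits; block becomes unreachable */
}

int
main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(&t, NULL);
    /* Under valgrind --leak-check=full this shows one definitely-lost
     * block, but the loss is bounded at one block per thread rather
     * than one per migration, so it does not grow over time. */
    return 0;
}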

Comment 13 Jincheng Miao 2013-09-10 08:06:38 UTC
All right, this bug was filed for the qemuMigrationPrepareDirect() memory leak, which is fixed in libvirt-0.10.2-18.el6_4.10, so I am changing the status to VERIFIED.

Comment 15 errata-xmlrpc 2013-09-19 18:08:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1272.html

