Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 984578

Summary: libvirtd leaks URI on destination when migrating
Product: Red Hat Enterprise Linux 6
Reporter: Chris Pelland <cpelland>
Component: libvirt
Assignee: Jiri Denemark <jdenemar>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: high
Version: 6.4
CC: acathrow, ajia, cpelland, cwei, dallan, dyuan, jdenemar, jmiao, jsvarova, jtomko, pm-eus, weizhan, xuzhang, ydu, zpeng
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: libvirt-0.10.2-18.el6_4.10
Doc Type: Bug Fix
Doc Text:
When migrating, libvirtd leaked the migration URI (Uniform Resource Identifier) on the destination host. A patch has been provided to fix this bug, and the migration URI is now freed correctly.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-19 18:08:23 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 977961
Bug Blocks:
Attachments:
  libvirt valgrind log (no flags)
  libvirt-0.10.2-18.el6_4.9 valgrind report (no flags)

Description Chris Pelland 2013-07-15 13:42:30 UTC
This bug has been copied from bug #977961 and has been proposed
to be backported to 6.4 z-stream (EUS).

Comment 6 Jincheng Miao 2013-09-06 11:32:42 UTC
The URI leak in qemuMigrationPrepareDirect() is fixed, but there are other leaks:
1. 
==6668== 0 bytes in 1 blocks are definitely lost in loss record 3 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7576B: virAllocN (memory.c:128)
==6668==    by 0x4F5B045: virNetServerProgramDispatch (virnetserverprogram.c:409)
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)
==6668==    by 0x30532E890C: clone (in /lib64/libc-2.12.so)

2.
==6668== 80 bytes in 1 blocks are definitely lost in loss record 908 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7578D: virAlloc (memory.c:100)
==6668==    by 0x4E8C577: virLastErrorObject (virterror.c:204)
==6668==    by 0x4E8CEB8: virResetLastError (virterror.c:355)
...
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)

3.
==6668== 188 (32 direct, 156 indirect) bytes in 1 blocks are definitely lost in loss record 1,067 of 1,619
==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==6668==    by 0x4E7578D: virAlloc (memory.c:100)
==6668==    by 0x4E80092: virThreadPoolSendJob (threadpool.c:340)
==6668==    by 0x4F5C4D6: virNetServerDispatchNewMessage (virnetserver.c:245)
==6668==    by 0x4F5DFB3: virNetServerClientDispatchRead (virnetserverclient.c:912)
==6668==    by 0x4F5E0CC: virNetServerClientDispatchEvent (virnetserverclient.c:1098)
==6668==    by 0x4E6DFFE: virEventPollRunOnce (event_poll.c:485)
==6668==    by 0x4E6CD96: virEventRunDefaultImpl (event.c:247)
==6668==    by 0x4F5BBDC: virNetServerRun (virnetserver.c:748)
==6668==    by 0x423716: main (libvirtd.c:1228)

4.
==6668== 968 bytes in 1 blocks are definitely lost in loss record 1,425 of 1,619
==6668==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==6668==    by 0x305C6A4254: xmlGetGlobalState (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x305C6A3394: __xmlGenericError (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x305C6E7956: xmlRelaxNGNewParserCtxt (in /usr/lib64/libxml2.so.2.7.6)
==6668==    by 0x3056207C2C: ??? (in /usr/lib64/libnetcf.so.1.4.0)
==6668==    by 0x30562049A5: ncf_init (in /usr/lib64/libnetcf.so.1.4.0)
==6668==    by 0x4F22D8: interfaceOpenInterface (interface_backend_netcf.c:141)
==6668==    by 0x4F0C32C: do_open (libvirt.c:1212)
==6668==    by 0x4F0CFBA: virConnectOpen (libvirt.c:1333)
==6668==    by 0x440577: remoteDispatchOpenHelper (remote.c:757)
==6668==    by 0x4F5B0B1: virNetServerProgramDispatch (virnetserverprogram.c:431)
==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)

5.
==7115== 42 (32 direct, 10 indirect) bytes in 1 blocks are definitely lost in loss record 739 of 1,635
==7115==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==7115==    by 0x4E7578D: virAlloc (memory.c:100)
==7115==    by 0x4746D7: qemuMigrationEatCookie (qemu_migration.c:475)
==7115==    by 0x475510: qemuMigrationRun (qemu_migration.c:1964)
==7115==    by 0x47629B: doNativeMigrate (qemu_migration.c:2184)
==7115==    by 0x479FAA: qemuMigrationPerform (qemu_migration.c:2853)
==7115==    by 0x4542E2: qemuDomainMigratePerform3 (qemu_driver.c:10107)
==7115==    by 0x4F11497: virDomainMigratePerform3 (libvirt.c:6253)
==7115==    by 0x42DE01: remoteDispatchDomainMigratePerform3Helper (remote.c:3593)

Do these leaks matter, or are they false positives from valgrind?

Comment 7 Jincheng Miao 2013-09-06 11:33:24 UTC
Created attachment 794670 [details]
libvirt valgrind log

Comment 8 Jiri Denemark 2013-09-06 13:53:59 UTC
Are these leaks new or were they present in the previous package (libvirt-0.10.2-18.el6_4.9) too?

Comment 9 Jincheng Miao 2013-09-09 03:20:27 UTC
Leaks 1 and 3 are new; leaks 2, 4, and 5 were already present in the previous package (libvirt-0.10.2-18.el6_4.9).

Even in libvirt-0.10.2-20.el6, leaks 2, 4, and 5 are present; see https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14

Comment 10 Jincheng Miao 2013-09-09 03:22:09 UTC
Created attachment 795477 [details]
libvirt-0.10.2-18.el6_4.9 valgrind report

This is similar to https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14

Comment 11 Jincheng Miao 2013-09-10 02:40:45 UTC
Jiri, what do you think about these leaks?

Comment 12 Jiri Denemark 2013-09-10 07:23:30 UTC
The reported leaks do not seem to be real issues.

Comment 13 Jincheng Miao 2013-09-10 08:06:38 UTC
All right. This bug was filed for the qemuMigrationPrepareDirect() memory leak, which is fixed in libvirt-0.10.2-18.el6_4.10, so I am changing the status to VERIFIED.

Comment 15 errata-xmlrpc 2013-09-19 18:08:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1272.html