Bug 984578
| Summary: | libvirtd leaks URI on destination when migrating | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Chris Pelland <cpelland> |
| Component: | libvirt | Assignee: | Jiri Denemark <jdenemar> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.4 | CC: | acathrow, ajia, cpelland, cwei, dallan, dyuan, jdenemar, jmiao, jsvarova, jtomko, pm-eus, weizhan, xuzhang, ydu, zpeng |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-0.10.2-18.el6_4.10 | Doc Type: | Bug Fix |
| Doc Text: | When migrating, libvirtd leaked the migration URI (Uniform Resource Identifier) on the destination host. A patch has been provided to fix this bug, and the migration URI is now freed correctly. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-19 18:08:23 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 977961 | | |
| Bug Blocks: | | | |
| Attachments: | | | |
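The Doc Text above notes only that the migration URI is now freed correctly. As a rough illustration of that kind of fix, here is a minimal, self-contained C sketch of the general pattern: a URI string allocated on the heap during the prepare step must be released on every path that does not hand ownership to the caller. The function name, URI format, and error handling below are hypothetical and are not taken from the actual libvirt patch.

```c
/* Minimal, self-contained sketch of the leak pattern the Doc Text describes:
 * a heap-allocated migration URI that is not released on every exit path.
 * Names and the URI format are hypothetical, not the actual libvirt patch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

static int prepare_migration(const char *host, int port, char **uri_out)
{
    char *uri = NULL;
    int ret = -1;

    /* Build the URI the destination will listen on. */
    if (asprintf(&uri, "tcp:%s:%d", host, port) < 0) {
        uri = NULL;        /* glibc leaves the pointer undefined on failure */
        goto cleanup;
    }

    /* ... validate the request, set up the incoming migration ... */

    *uri_out = uri;        /* hand ownership to the caller only on success */
    uri = NULL;
    ret = 0;

 cleanup:
    free(uri);             /* the kind of free() such a fix adds: without it,
                              any path that still owns 'uri' here leaks it */
    return ret;
}

int main(void)
{
    char *uri = NULL;

    if (prepare_migration("dst.example.com", 49152, &uri) == 0) {
        printf("incoming migration URI: %s\n", uri);
        free(uri);
    }
    return 0;
}
```

The goto-cleanup shape mirrors the idiom commonly used in the libvirt code base, where a single cleanup label frees everything the function still owns when it returns.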
Description Chris Pelland 2013-07-15 13:42:30 UTC
The uri leak in qemuMigrationPrepareDirect() is fixed, but there are other leaks:

1.

    ==6668== 0 bytes in 1 blocks are definitely lost in loss record 3 of 1,619
    ==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
    ==6668==    by 0x4E7576B: virAllocN (memory.c:128)
    ==6668==    by 0x4F5B045: virNetServerProgramDispatch (virnetserverprogram.c:409)
    ==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
    ==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
    ==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
    ==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
    ==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)
    ==6668==    by 0x30532E890C: clone (in /lib64/libc-2.12.so)

2.

    ==6668== 80 bytes in 1 blocks are definitely lost in loss record 908 of 1,619
    ==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
    ==6668==    by 0x4E7578D: virAlloc (memory.c:100)
    ==6668==    by 0x4E8C577: virLastErrorObject (virterror.c:204)
    ==6668==    by 0x4E8CEB8: virResetLastError (virterror.c:355)
    ...
    ==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)
    ==6668==    by 0x4F5CA3B: virNetServerHandleJob (virnetserver.c:191)
    ==6668==    by 0x4E8038B: virThreadPoolWorker (threadpool.c:144)
    ==6668==    by 0x4E7FC78: virThreadHelper (threads-pthread.c:161)
    ==6668==    by 0x3053607850: start_thread (in /lib64/libpthread-2.12.so)

3.

    ==6668== 188 (32 direct, 156 indirect) bytes in 1 blocks are definitely lost in loss record 1,067 of 1,619
    ==6668==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
    ==6668==    by 0x4E7578D: virAlloc (memory.c:100)
    ==6668==    by 0x4E80092: virThreadPoolSendJob (threadpool.c:340)
    ==6668==    by 0x4F5C4D6: virNetServerDispatchNewMessage (virnetserver.c:245)
    ==6668==    by 0x4F5DFB3: virNetServerClientDispatchRead (virnetserverclient.c:912)
    ==6668==    by 0x4F5E0CC: virNetServerClientDispatchEvent (virnetserverclient.c:1098)
    ==6668==    by 0x4E6DFFE: virEventPollRunOnce (event_poll.c:485)
    ==6668==    by 0x4E6CD96: virEventRunDefaultImpl (event.c:247)
    ==6668==    by 0x4F5BBDC: virNetServerRun (virnetserver.c:748)
    ==6668==    by 0x423716: main (libvirtd.c:1228)

4.

    ==6668== 968 bytes in 1 blocks are definitely lost in loss record 1,425 of 1,619
    ==6668==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
    ==6668==    by 0x305C6A4254: xmlGetGlobalState (in /usr/lib64/libxml2.so.2.7.6)
    ==6668==    by 0x305C6A3394: __xmlGenericError (in /usr/lib64/libxml2.so.2.7.6)
    ==6668==    by 0x305C6E7956: xmlRelaxNGNewParserCtxt (in /usr/lib64/libxml2.so.2.7.6)
    ==6668==    by 0x3056207C2C: ??? (in /usr/lib64/libnetcf.so.1.4.0)
    ==6668==    by 0x30562049A5: ncf_init (in /usr/lib64/libnetcf.so.1.4.0)
    ==6668==    by 0x4F22D8: interfaceOpenInterface (interface_backend_netcf.c:141)
    ==6668==    by 0x4F0C32C: do_open (libvirt.c:1212)
    ==6668==    by 0x4F0CFBA: virConnectOpen (libvirt.c:1333)
    ==6668==    by 0x440577: remoteDispatchOpenHelper (remote.c:757)
    ==6668==    by 0x4F5B0B1: virNetServerProgramDispatch (virnetserverprogram.c:431)
    ==6668==    by 0x4F5C39D: virNetServerProcessMsg (virnetserver.c:170)

5.

    ==7115== 42 (32 direct, 10 indirect) bytes in 1 blocks are definitely lost in loss record 739 of 1,635
    ==7115==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
    ==7115==    by 0x4E7578D: virAlloc (memory.c:100)
    ==7115==    by 0x4746D7: qemuMigrationEatCookie (qemu_migration.c:475)
    ==7115==    by 0x475510: qemuMigrationRun (qemu_migration.c:1964)
    ==7115==    by 0x47629B: doNativeMigrate (qemu_migration.c:2184)
    ==7115==    by 0x479FAA: qemuMigrationPerform (qemu_migration.c:2853)
    ==7115==    by 0x4542E2: qemuDomainMigratePerform3 (qemu_driver.c:10107)
    ==7115==    by 0x4F11497: virDomainMigratePerform3 (libvirt.c:6253)
    ==7115==    by 0x42DE01: remoteDispatchDomainMigratePerform3Helper (remote.c:3593)

Do these leaks matter, or are they misreported by valgrind?

Created attachment 794670 [details]
libvirt valgrind log

Are these leaks new, or were they also present in the previous package (libvirt-0.10.2-18.el6_4.9)?

Leaks 1 and 3 are new; leaks 2, 4, and 5 already existed in the previous package (libvirt-0.10.2-18.el6_4.9). Even libvirt-0.10.2-20.el6 still shows leaks 2, 4, and 5; see https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14

Created attachment 795477 [details]
libvirt-0.10.2-18.el6_4.9 valgrind report

This is similar to https://bugzilla.redhat.com/show_bug.cgi?id=977961#c14. Jiri, what do you think about these leaks?

The reported leaks do not seem to be real issues.

All right, this bug was filed for the qemuMigrationPrepareDirect() memory leak, which is fixed in libvirt-0.10.2-18.el6_4.10, so I am changing the status to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1272.html
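A side note on the assessment above that the remaining valgrind records do not look like real issues: records such as 2 (virLastErrorObject) and 4 (xmlGetGlobalState via ncf_init) appear to point at state that is set up once per thread or once per process and then reused, so the reported bytes stay constant instead of growing with each migration. The sketch below is a generic, hypothetical illustration of that pattern; it is not libvirt code.

```c
/* Hypothetical illustration, not libvirt code: per-thread state allocated
 * once and reused afterwards.  A leak checker may flag this block at exit
 * because nothing ever frees it, but the allocation does not repeat, so it
 * stays bounded, unlike the per-migration URI leak this bug was filed for. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct thread_state {
    char last_error[64];
};

static pthread_key_t state_key;
static pthread_once_t state_once = PTHREAD_ONCE_INIT;

static void state_key_init(void)
{
    /* No destructor on purpose: the state is meant to live as long as the
     * thread does, which is exactly why a leak checker reports it. */
    pthread_key_create(&state_key, NULL);
}

static struct thread_state *get_thread_state(void)
{
    struct thread_state *st;

    pthread_once(&state_once, state_key_init);
    st = pthread_getspecific(state_key);
    if (!st) {
        st = calloc(1, sizeof(*st));   /* happens at most once per thread */
        if (st)
            pthread_setspecific(state_key, st);
    }
    return st;
}

int main(void)
{
    struct thread_state *st = get_thread_state();

    if (st) {
        /* Repeated calls reuse the same block; memory use stays constant. */
        snprintf(st->last_error, sizeof(st->last_error), "no error");
        puts(st->last_error);
    }
    return 0;   /* the block is intentionally never freed */
}
```

Whether such a block is classified as "definitely lost", "possibly lost", or "still reachable" depends on whether valgrind can still find a pointer to it at exit, which is why bounded one-time allocations often show up in leak reports without indicating a real problem.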