Bug 1445627 - Memory is leaked after re-starting a VM
Summary: Memory is leaked after re-starting a VM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Fangge Jin
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-04-26 07:33 UTC by Peter Krempa
Modified: 2017-09-25 06:09 UTC
CC List: 5 users

Fixed In Version: libvirt-3.2.0-4.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-02 00:08:25 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHEA-2017:1846
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2017-08-01 18:02:50 UTC

Description Peter Krempa 2017-04-26 07:33:31 UTC
Description of problem:

The following pointers are leaked after a restart of a VM:
priv->autoNodeset
priv->autoCpuset
priv->usbaddrs
priv->migTLSAlias
priv->migSecinfo

Found by code inspection and a valgrind run.

Note that some of them require specific configurations for the leaks to show.

The first two require automatic VM placement.

priv->usbaddrs requires a USB device.

priv->mig* requires the VM to be migrated using TLS without undefining it on the source.
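
For illustration, a minimal standalone C sketch of the pattern (hypothetical names, not libvirt code): the per-domain private data object outlives the QEMU process, so a pointer that is allocated on every start and never released on stop is leaked as soon as the VM is started again.

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the per-domain private data object; only the
 * shape of the problem matters here, not the real field types. */
typedef struct {
    char *autoNodeset;   /* stands in for priv->autoNodeset etc. */
} DomainPrivate;

static void domain_start(DomainPrivate *priv)
{
    /* Assigned unconditionally on every start; any previous value is
     * overwritten without being freed. */
    priv->autoNodeset = strdup("0-3");
}

static void domain_stop_leaky(DomainPrivate *priv)
{
    /* The buggy stop path: the private data object is kept around for the
     * next start, but its per-run pointers are not released. */
    (void)priv;
}

int main(void)
{
    DomainPrivate priv = { 0 };

    domain_start(&priv);        /* first start: allocation #1 */
    domain_stop_leaky(&priv);   /* destroy: allocation #1 still referenced */
    domain_start(&priv);        /* restart: allocation #1 is now unreachable */

    free(priv.autoNodeset);     /* only the latest allocation gets freed */
    return 0;                   /* valgrind reports the first strdup as lost */
}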

Comment 2 Peter Krempa 2017-04-28 07:47:18 UTC
It turns out that priv->migSecinfo is properly cleared on all migration paths, so it is not leaked.

The remaining issues are fixed by:
commit 8c1fee5f12e607a775199b65915715bb5a2b581d
Author: Peter Krempa <pkrempa>
Date:   Wed Apr 26 12:46:03 2017 +0200

    qemu: process: Clean up priv->migTLSAlias
    
    The alias would be leaked, since it's not freed on the vm stop path.

commit 3ab802d689796ebac6545267d5db248e13a9a0e6
Author: Peter Krempa <pkrempa>
Date:   Wed Apr 26 09:57:39 2017 +0200

    qemu: process: Don't leak priv->usbaddrs after VM restart
    
    Since the private data structure is not freed upon stopping a VM, the
    usbaddrs pointer would be leaked:
    
    ==15388== 136 (16 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 893 of 1,019
    ==15388==    at 0x4C2CF55: calloc (vg_replace_malloc.c:711)
    ==15388==    by 0x54BF64A: virAlloc (viralloc.c:144)
    ==15388==    by 0x5547588: virDomainUSBAddressSetCreate (domain_addr.c:1608)
    ==15388==    by 0x144D38A2: qemuDomainAssignUSBAddresses (qemu_domain_address.c:2458)
    ==15388==    by 0x144D38A2: qemuDomainAssignAddresses (qemu_domain_address.c:2515)
    ==15388==    by 0x144ED1E3: qemuProcessPrepareDomain (qemu_process.c:5398)
    ==15388==    by 0x144F51FF: qemuProcessStart (qemu_process.c:5979)
    [...]

commit 1730cdc665a499afc28683a4ce21493f967411b7
Author: Peter Krempa <pkrempa>
Date:   Tue Apr 25 15:17:34 2017 +0200

    qemu: process: Clean automatic NUMA/cpu pinning information on shutdown
    
    Clean the stale data after shutting down the VM. Otherwise the data
    would be leaked on next VM start. This happens due to the fact that the
    private data object is not freed on destroy of the VM.
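
The pattern these commits introduce can be sketched as follows. This is a hedged, self-contained illustration, not the actual qemuProcessStop() code: DomainPrivate, domain_start() and domain_stop_fixed() are hypothetical stand-ins, and plain free() stands in for the type-specific cleanup functions (for example virBitmapFree()) used in libvirt itself.

#include <stdlib.h>
#include <string.h>

typedef struct {
    char *autoNodeset;
    char *autoCpuset;
    char *usbaddrs;
    char *migTLSAlias;
} DomainPrivate;

static void domain_start(DomainPrivate *priv)
{
    /* Per-run allocations made when the VM is started. */
    priv->autoNodeset = strdup("0-3");
    priv->autoCpuset  = strdup("0-7");
    priv->usbaddrs    = strdup("usb-address-set");
    priv->migTLSAlias = strdup("tls-alias");
}

static void domain_stop_fixed(DomainPrivate *priv)
{
    /* Fixed stop path: free each per-run allocation and reset the pointer,
     * so the next start does not silently overwrite (and leak) stale data. */
    free(priv->autoNodeset);  priv->autoNodeset = NULL;
    free(priv->autoCpuset);   priv->autoCpuset  = NULL;
    free(priv->usbaddrs);     priv->usbaddrs    = NULL;
    free(priv->migTLSAlias);  priv->migTLSAlias = NULL;
}

int main(void)
{
    DomainPrivate priv = { 0 };

    domain_start(&priv);       /* first start */
    domain_stop_fixed(&priv);  /* everything released before the restart */
    domain_start(&priv);       /* restart: nothing stale left to leak */
    domain_stop_fixed(&priv);

    return 0;                  /* valgrind reports no definite losses */
}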

Comment 5 Fangge Jin 2017-05-24 09:44:20 UTC
Reproduced with build libvirt-3.2.0-3.virtcov.el7.x86_64

Steps:
0. Prepare a guest with a USB device and vcpu placement='auto':
# virsh dumpxml rhel7.4
...
  <vcpu placement='auto'>1</vcpu>
...
    <hub type='usb'>
      <address type='usb' bus='0' port='1'/>
    </hub>
...

1. # valgrind --leak-check=full --trace-children=no libvirtd

2. # virsh start rhel7.4

3. # virsh destroy rhel7.4

4. # virsh start rhel7.4

5. Check valgrind output:
priv->autoCpuset
==6267== 40 (32 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 451 of 731
==6267==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==6267==    by 0x556A3BB: virAlloc (viralloc.c:144)
==6267==    by 0x556CA74: virBitmapNewQuiet (virbitmap.c:77)
==6267==    by 0x556CB30: virBitmapNew (virbitmap.c:106)
==6267==    by 0x5625411: virCapabilitiesGetCpusForNodemask (capabilities.c:1135)
==6267==    by 0x32530D6C: qemuProcessPrepareDomain (qemu_process.c:5382)
==6267==    by 0x32538CFA: qemuProcessStart (qemu_process.c:5977)
==6267==    by 0x325BE66E: qemuDomainObjStart.constprop.41 (qemu_driver.c:6958)
==6267==    by 0x325BF1DD: qemuDomainCreateWithFlags (qemu_driver.c:7012)
==6267==    by 0x325BF2D2: qemuDomainCreate (qemu_driver.c:7031)
==6267==    by 0x5700DCF: virDomainCreate (libvirt-domain.c:6532)
==6267==    by 0x17C4ED: remoteDispatchDomainCreate (remote_dispatch.h:4222)
==6267==    by 0x17C4ED: remoteDispatchDomainCreateHelper (remote_dispatch.h:4198)
==6267== 

priv->autoNodeset
==6267== 160 (32 direct, 128 indirect) bytes in 1 blocks are definitely lost in loss record 567 of 731
==6267==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==6267==    by 0x556A3BB: virAlloc (viralloc.c:144)
==6267==    by 0x556CA74: virBitmapNewQuiet (virbitmap.c:77)
==6267==    by 0x556CB30: virBitmapNew (virbitmap.c:106)
==6267==    by 0x556CFDB: virBitmapParseSeparator (virbitmap.c:436)
==6267==    by 0x556D3F8: virBitmapParse (virbitmap.c:542)
==6267==    by 0x32530D43: qemuProcessPrepareDomain (qemu_process.c:5378)
==6267==    by 0x32538CFA: qemuProcessStart (qemu_process.c:5977)
==6267==    by 0x325BE66E: qemuDomainObjStart.constprop.41 (qemu_driver.c:6958)
==6267==    by 0x325BF1DD: qemuDomainCreateWithFlags (qemu_driver.c:7012)
==6267==    by 0x325BF2D2: qemuDomainCreate (qemu_driver.c:7031)
==6267==    by 0x5700DCF: virDomainCreate (libvirt-domain.c:6532)
==6267== 

priv->usbaddrs
==6267== 352 (16 direct, 336 indirect) bytes in 1 blocks are definitely lost in loss record 661 of 731
==6267==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==6267==    by 0x556A3BB: virAlloc (viralloc.c:144)
==6267==    by 0x5629F6A: virDomainUSBAddressSetCreate (domain_addr.c:1608)
==6267==    by 0x3250AF3D: qemuDomainAssignUSBAddresses (qemu_domain_address.c:2458)
==6267==    by 0x3250AF3D: qemuDomainAssignAddresses (qemu_domain_address.c:2515)
==6267==    by 0x3252FE00: qemuProcessPrepareDomain (qemu_process.c:5396)
==6267==    by 0x32538CFA: qemuProcessStart (qemu_process.c:5977)
==6267==    by 0x325BE66E: qemuDomainObjStart.constprop.41 (qemu_driver.c:6958)
==6267==    by 0x325BF1DD: qemuDomainCreateWithFlags (qemu_driver.c:7012)
==6267==    by 0x325BF2D2: qemuDomainCreate (qemu_driver.c:7031)
==6267==    by 0x5700DCF: virDomainCreate (libvirt-domain.c:6532)
==6267==    by 0x17C4ED: remoteDispatchDomainCreate (remote_dispatch.h:4222)
==6267==    by 0x17C4ED: remoteDispatchDomainCreateHelper (remote_dispatch.h:4198)
==6267==    by 0x5794F1B: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==6267==    by 0x5794F1B: virNetServerProgramDispatch (virnetserverprogram.c:307)

6. For priv->migTLSAlias, the steps are:
start the guest -> do a TLS migration -> destroy the guest on the target and start it on the source host -> do a TLS migration again -> terminate valgrind with "Ctrl+C"

The memory leak can then be seen in the valgrind output:
==6267== 1 bytes in 1 blocks are definitely lost in loss record 2 of 731
==6267==    at 0x4C29BE3: malloc (vg_replace_malloc.c:299)
==6267==    by 0x8861949: strdup (in /usr/lib64/libc-2.17.so)
==6267==    by 0x5603B12: virStrdup (virstring.c:784)
==6267==    by 0x325732AF: qemuMonitorJSONGetMigrationParams (qemu_monitor_json.c:2690)
==6267==    by 0x3255880C: qemuMonitorGetMigrationParams (qemu_monitor.c:2552)
==6267==    by 0x3253E5B7: qemuMigrationCheckTLSCreds (qemu_migration.c:113)
==6267==    by 0x3253E683: qemuMigrationCheckSetupTLS.isra.12 (qemu_migration.c:162)
==6267==    by 0x325439C5: qemuMigrationBegin (qemu_migration.c:2091)
==6267==    by 0x3258E1FE: qemuDomainMigrateBegin3Params (qemu_driver.c:11980)
==6267==    by 0x56FA15D: virDomainMigrateBegin3Params (libvirt-domain.c:4836)
==6267==    by 0x1538CC: remoteDispatchDomainMigrateBegin3Params (remote.c:5260)
==6267==    by 0x1538CC: remoteDispatchDomainMigrateBegin3ParamsHelper (remote_dispatch.h:7302)
==6267==    by 0x5794F1B: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==6267==    by 0x5794F1B: virNetServerProgramDispatch (virnetserverprogram.c:307)
==6267== 
==6267== 8 bytes in 1 blocks are definitely lost in loss record 71 of 731
==6267==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==6267==    by 0x556A444: virAllocN (viralloc.c:191)
==6267==    by 0x5586B7B: virConfGetValueStringList (virconf.c:1001)
==6267==    by 0x325266B8: virQEMUDriverConfigLoadFile (qemu_conf.c:835)
==6267==    by 0x325852DB: qemuStateInitialize (qemu_driver.c:664)
==6267==    by 0x56E606E: virStateInitialize (libvirt.c:770)
==6267==    by 0x143792: daemonRunStateInit (libvirtd.c:881)
==6267==    by 0x560D104: virThreadHelper (virthread.c:206)
==6267==    by 0x85C6E24: start_thread (in /usr/lib64/libpthread-2.17.so)
==6267==    by 0x88D334C: clone (in /usr/lib64/libc-2.17.so)

Comment 6 Fangge Jin 2017-05-24 09:46:42 UTC
Verified with build libvirt-3.2.0-5.virtcov.el7.x86_64

Using the same steps as in comment 5, the four memory leaks are fixed.

Comment 7 Fangge Jin 2017-05-24 11:36:15 UTC
Hi Peter

There are some other memory leaks that you may want to fix as well:

1) Start a guest:
==9189== 80 bytes in 1 blocks are definitely lost in loss record 1,548 of 2,485
==9189==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==9189==    by 0x556C1CB: virAlloc (viralloc.c:144)
==9189==    by 0x5593C13: virLastErrorObject (virerror.c:244)
==9189==    by 0x5596280: virResetLastError (virerror.c:416)
==9189==    by 0x571DEE5: virConnectRegisterCloseCallback (libvirt-host.c:1220)
==9189==    by 0x14D23D: remoteDispatchConnectRegisterCloseCallback (remote.c:3871)
==9189==    by 0x14D23D: remoteDispatchConnectRegisterCloseCallbackHelper (remote_dispatch.h:3138)
==9189==    by 0x5797D7B: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==9189==    by 0x5797D7B: virNetServerProgramDispatch (virnetserverprogram.c:307)
==9189==    by 0x1977E9: virNetServerProcessMsg (virnetserver.c:148)
==9189==    by 0x197BE7: virNetServerHandleJob (virnetserver.c:169)
==9189==    by 0x56104C0: virThreadPoolWorker (virthreadpool.c:167)
==9189==    by 0x560F28F: virThreadHelper (virthread.c:206)
==9189==    by 0x87FBE24: start_thread (in /usr/lib64/libpthread-2.17.so)
==9189==

2) Do a migration:
==9259== 64 (56 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 1,532 of 2,606
==9259==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==9259==    by 0x556C83C: virAllocVar (viralloc.c:560)
==9259==    by 0x55E292D: virObjectNew (virobject.c:193)
==9259==    by 0x56E540C: virGetDomain (datatypes.c:282)
==9259==    by 0x169631: get_nonnull_domain (remote.c:6968)
==9259==    by 0x169631: remoteDispatchDomainGetJobInfo (remote_dispatch.h:5640)
==9259==    by 0x169631: remoteDispatchDomainGetJobInfoHelper (remote_dispatch.h:5617)
==9259==    by 0x5797D7B: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==9259==    by 0x5797D7B: virNetServerProgramDispatch (virnetserverprogram.c:307)
==9259==    by 0x1977E9: virNetServerProcessMsg (virnetserver.c:148)
==9259==    by 0x197BE7: virNetServerHandleJob (virnetserver.c:169)
==9259==    by 0x56104C0: virThreadPoolWorker (virthreadpool.c:167)
==9259==    by 0x560F28F: virThreadHelper (virthread.c:206)
==9259==    by 0x87FBE24: start_thread (in /usr/lib64/libpthread-2.17.so)
==9259==    by 0x8B0834C: clone (in /usr/lib64/libc-2.17.so)

==9259== 96 bytes in 1 blocks are definitely lost in loss record 1,719 of 2,606
==9259==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==9259==    by 0x556C254: virAllocN (viralloc.c:191)
==9259==    by 0x5797CD9: virNetServerProgramDispatchCall (virnetserverprogram.c:415)
==9259==    by 0x5797CD9: virNetServerProgramDispatch (virnetserverprogram.c:307)
==9259==    by 0x1977E9: virNetServerProcessMsg (virnetserver.c:148)
==9259==    by 0x197BE7: virNetServerHandleJob (virnetserver.c:169)
==9259==    by 0x56104C0: virThreadPoolWorker (virthreadpool.c:167)
==9259==    by 0x560F28F: virThreadHelper (virthread.c:206)
==9259==    by 0x87FBE24: start_thread (in /usr/lib64/libpthread-2.17.so)
==9259==    by 0x8B0834C: clone (in /usr/lib64/libc-2.17.so)

==9259== 220 (32 direct, 188 indirect) bytes in 1 blocks are definitely lost in loss record 1,983 of 2,606
==9259==    at 0x4C2B975: calloc (vg_replace_malloc.c:711)
==9259==    by 0x556C1CB: virAlloc (viralloc.c:144)
==9259==    by 0x5610E4C: virThreadPoolSendJob (virthreadpool.c:395)
==9259==    by 0x197ACB: virNetServerDispatchNewMessage (virnetserver.c:221)
==9259==    by 0x19AD26: virNetServerClientDispatchRead (virnetserverclient.c:1268)
==9259==    by 0x19B7A1: virNetServerClientDispatchEvent (virnetserverclient.c:1457)
==9259==    by 0x1A1F27: virNetSocketEventHandle (virnetsocket.c:2133)
==9259==    by 0x55999EE: virEventPollDispatchHandles (vireventpoll.c:508)
==9259==    by 0x55999EE: virEventPollRunOnce (vireventpoll.c:657)
==9259==    by 0x55976A9: virEventRunDefaultImpl (virevent.c:314)
==9259==    by 0x578F264: virNetDaemonRun (virnetdaemon.c:818)
==9259==    by 0x14476A: main (libvirtd.c:1541)

Comment 9 errata-xmlrpc 2017-08-02 00:08:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846

