Bug 978352 - libvirtd leaks memory in virCgroupMoveTask
Summary: libvirtd leaks memory in virCgroupMoveTask
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
Depends On:
Blocks: 984556
Reported: 2013-06-26 12:40 UTC by Ján Tomko
Modified: 2013-11-21 09:04 UTC (History)
10 users

Fixed In Version: libvirt-0.10.2-19.el6
Doc Type: Bug Fix
Doc Text:
Prior to this update, the libvirtd daemon leaked memory in the virCgroupMoveTask() function. With this update, the memory allocations in this function are managed correctly, and libvirtd no longer leaks memory.
Clone Of:
Last Closed: 2013-11-21 09:04:11 UTC
Target Upstream Version:

Attachments (Terms of Use)
valgrind msg (25.43 KB, text/plain)
2013-07-15 03:31 UTC, zhpeng

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1581 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2013-11-21 01:11:35 UTC

Description Ján Tomko 2013-06-26 12:40:45 UTC
Description of problem:
libvirtd leaks memory in virCgroupMoveTask

Version-Release number of selected component (if applicable):

How reproducible:
100 %

Steps to Reproduce:
1. run libvirtd under valgrind:
valgrind --leak-check=full libvirtd
2. create a domain:
virsh create /dev/stdin <<EOF
<domain type='qemu'>
  <name>duck</name>
  <memory unit='MiB'>32</memory>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
</domain>
EOF
Domain duck created from /dev/stdin

Actual results:
Valgrind shows a memory leak:

==4945== 16,386 bytes in 2 blocks are definitely lost in loss record 715 of 722
==4945==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==4945==    by 0x4A06B62: realloc (vg_replace_malloc.c:662)
==4945==    by 0x4E787CB: virReallocN (memory.c:160)
==4945==    by 0x4E87A89: virFileReadLimFD (util.c:400)
==4945==    by 0x4E87BF7: virFileReadAll (util.c:461)
==4945==    by 0x4E6D140: virCgroupGetValueStr (cgroup.c:363)
==4945==    by 0x4E6DD7D: virCgroupMoveTask (cgroup.c:897)
==4945==    by 0x471328: qemuSetupCgroupForEmulator (qemu_cgroup.c:695)
==4945==    by 0x48648E: qemuProcessStart (qemu_process.c:3965)
==4945==    by 0x461678: qemudDomainCreate (qemu_driver.c:1595)
==4945==    by 0x4F0A580: virDomainCreateXML (libvirt.c:1954)
==4945==    by 0x43FBD0: remoteDispatchDomainCreateXMLHelper (remote_dispatch.h:1172)

Expected results:
libvirtd frees the memory.

Comment 2 Ján Tomko 2013-06-26 13:46:32 UTC
Fixed upstream:
commit 5bc8ecb8d1170f41d4c177c1cf0e87c54194a3a3
Author:     Ján Tomko <jtomko>
AuthorDate: 2013-06-26 14:55:27 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2013-06-26 15:38:01 +0200

    Plug leak in virCgroupMoveTask
    We only break out of the while loop if *content is an empty string.
    However, the buffer has been allocated with BUFSIZ + 1 bytes (8193 in
    my case), and it gets overwritten on the next iteration of the for
    loop. Move VIR_FREE to right before the overwrite to avoid the leak.
    ==5777== 16,386 bytes in 2 blocks are definitely lost in loss record 1,022 of 1,027
    ==5777==    by 0x5296E28: virReallocN (viralloc.c:184)
    ==5777==    by 0x52B0C66: virFileReadLimFD (virfile.c:1137)
    ==5777==    by 0x52B0E1A: virFileReadAll (virfile.c:1199)
    ==5777==    by 0x529B092: virCgroupGetValueStr (vircgroup.c:534)
    ==5777==    by 0x529AF64: virCgroupMoveTask (vircgroup.c:1079)
    Introduced by 83e4c77.

git describe: v1.1.0-rc1-26-g5bc8ecb

Downstream patch posted too.

Comment 9 zhpeng 2013-07-15 03:31:19 UTC
Created attachment 773495 [details]
valgrind msg

Comment 11 Ján Tomko 2013-07-15 08:23:59 UTC
(In reply to zhpeng from comment #9)
None of the leaks in there seem important to me.

Comment 14 errata-xmlrpc 2013-11-21 09:04:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

