Bug 2007036 - Memory leak when using dma_read/write with virtio-scsi
Summary: Memory leak when using dma_read/write with virtio-scsi
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.9
Hardware: x86_64
OS: All
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Stefano Garzarella
QA Contact: qing.wang
URL:
Whiteboard:
Depends On: 2016311
Blocks:
 
Reported: 2021-09-22 22:16 UTC by Germano Veit Michel
Modified: 2023-03-14 07:41 UTC
CC List: 18 users

Fixed In Version: qemu-kvm-1.5.3-175.el7_9.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-23 17:17:46 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/rhel/src/qemu-kvm qemu-kvm merge_requests 53 0 None None None 2021-11-02 14:48:38 UTC
Red Hat Issue Tracker RHELPLAN-97982 0 None None None 2021-09-22 22:18:07 UTC
Red Hat Knowledge Base (Solution) 6407891 0 None None None 2021-10-12 01:26:12 UTC
Red Hat Product Errata RHBA-2021:4797 0 None None None 2021-11-23 17:17:48 UTC

Description Germano Veit Michel 2021-09-22 22:16:13 UTC
Description of problem:

Customer virtual machines' memory usage grows indefinitely, roughly in proportion to the amount of I/O.

Look at this pmap output for a VM: it has 4G of RAM (the mapping at 00007f9279e00000), but another 33G is allocated at 000055f9cafb3000. So a VM with 4G of RAM uses 37G :(

Address           Kbytes     RSS   Dirty Mode  Mapping
000055f9cafb3000 33068252 32961588 32961588 rw---   [ anon ]
...
00007f9279e00000 4194304 4044800 4044800 rw---   [ anon ]

We ran qemu-kvm under valgrind and hammered the disk using FIO.

==118156== 78,631,712 (54,073,600 direct, 24,558,112 indirect) bytes in 422,450 blocks are definitely lost in loss record 2,780 of 2,781
==118156==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==118156==    by 0x67E168D: g_malloc (gmem.c:99)
==118156==    by 0x67F8C8D: g_slice_alloc (gslice.c:1025)
==118156==    by 0x1974FE: qemu_aio_get (block.c:4808)
==118156==    by 0x1D7FB3: dma_bdrv_io (dma-helpers.c:208)
==118156==    by 0x1D806C: dma_bdrv_write (dma-helpers.c:239)
==118156==    by 0x2381EC: scsi_write_data (scsi-disk.c:530)
==118156==    by 0x317211: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==118156==    by 0x25EC15: qemu_iohandler_poll (iohandler.c:143)
==118156==    by 0x26318F: main_loop_wait (main-loop.c:476)
==118156==    by 0x18186F: main_loop (vl.c:1997)
==118156==    by 0x18186F: main (vl.c:4367)
==118156==
==118156== 274,216,128 (220,300,416 direct, 53,915,712 indirect) bytes in 1,721,097 blocks are definitely lost in loss record 2,781 of 2,781
==118156==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==118156==    by 0x67E168D: g_malloc (gmem.c:99)
==118156==    by 0x67F8C8D: g_slice_alloc (gslice.c:1025)
==118156==    by 0x1974FE: qemu_aio_get (block.c:4808)
==118156==    by 0x1D7FB3: dma_bdrv_io (dma-helpers.c:208)
==118156==    by 0x1D803C: dma_bdrv_read (dma-helpers.c:231)
==118156==    by 0x2379F0: scsi_do_read (scsi-disk.c:360)
==118156==    by 0x317211: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==118156==    by 0x25EC15: qemu_iohandler_poll (iohandler.c:143)
==118156==    by 0x26318F: main_loop_wait (main-loop.c:476)
==118156==    by 0x18186F: main_loop (vl.c:1997)
==118156==    by 0x18186F: main (vl.c:4367)

It looks like the block aiocb is not freed.

Version-Release number of selected component (if applicable):
qemu-kvm-1.5.3-175.el7_9.4

How reproducible:
Always at customer site

Steps to Reproduce:

Run FIO inside a VM whose disk is attached through librbd.

# cat /root/leak_test.fio 
[global]
randrepeat=0
filename=/root/test.dat
iodepth=8
size=80g
direct=0
ioengine=libaio

[iometer]
stonewall
bs=4M
rw=randrw

[iometer_just_write]
stonewall
bs=4M
rw=write

[iometer_just_read]
stonewall
bs=4M
rw=read

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='oneadmin'>
        <secret type='ceph' uuid='31c78ac2-40de-49f7-b96c-4dade79785c7'/>
      </auth>
      <source protocol='rbd' name='example/disk'>
        <host name='A.B.C.D' port='6789'/>
        <host name='A.B.C.E' port='6789'/>
        <host name='A.B.C.F' port='6789'/>
      </source>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-1-0'/>
      <address type='drive' controller='0' bus='0' target='1' unit='0'/>
    </disk>

Comment 3 Tingting Mao 2021-09-23 07:23:04 UTC
Tried to reproduce this bug as below, but failed. Could anyone give some pointers here? Thanks.


Tested env:
qemu-kvm-1.5.3-175.el7_9.4.x86_64
kernel-3.10.0-1160.el7.x86_64


Steps:
1. Boot guest under valgrind
# valgrind sh qemu.sh RHEL-7.9-x86_64-latest.qcow2

Note:
# cat qemu.sh 
/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-sandbox off  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=$1 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=rbd:rbd/example/disk \
	-device scsi-hd,id=image2,drive=drive_image2 \
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+invtsc \
	-vnc :0  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio

2. Exec fio in guest
(guest)# fio leak_test.fio
Note:
(guest)# cat leak_test.fio 
[global]
randrepeat=0
filename=/root/test.dat
iodepth=8
size=80g
direct=0
ioengine=libaio

[iometer]
stonewall
bs=4M
rw=randrw

[iometer_just_write]
stonewall
bs=4M
rw=write

[iometer_just_read]
stonewall
bs=4M
rw=read

3. Exec pmap for the process of qemu-kvm
# pmap 23271
23271:   /usr/libexec/qemu-kvm -name avocado-vt-vm1 -sandbox off -machine pc -nodefaults -vga std -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=RHEL-7.9-x86_64-latest.qcow2 -device scsi-hd,id=image1,drive=drive_image1 -drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=rbd:rbd/example/disk -device scsi-hd,id=image2,drive=drive_image2 -device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=id
0000560b860cb000   3860K r-x-- qemu-kvm
0000560b86690000    836K r---- qemu-kvm
0000560b86761000    296K rw--- qemu-kvm
0000560b867ab000   8664K rw---   [ anon ]
0000560b88453000 160956K rw---   [ anon ]
00007f642a5c1000      4K -----   [ anon ]
00007f642a5c2000   8192K rw---   [ anon ]
00007f642adc2000      4K -----   [ anon ]
00007f642adc3000   8192K rw---   [ anon ]
00007f642b5c3000      4K -----   [ anon ]
00007f642b5c4000   8192K rw---   [ anon ]
00007f642bdc4000      4K -----   [ anon ]
00007f642bdc5000   8192K rw---   [ anon ]
00007f642c5c5000      4K -----   [ anon ]
00007f642c5c6000   8192K rw---   [ anon ]
00007f642cdc6000      4K -----   [ anon ]
00007f642cdc7000   8192K rw---   [ anon ]
00007f642d5c7000      4K -----   [ anon ]
00007f642d5c8000   8192K rw---   [ anon ]
00007f642ddc8000      4K -----   [ anon ]
00007f642ddc9000   8192K rw---   [ anon ]
00007f642e5c9000      4K -----   [ anon ]
00007f642e5ca000   8192K rw---   [ anon ]
00007f642edca000      4K -----   [ anon ]
00007f642edcb000   8192K rw---   [ anon ]
00007f642f5cb000      4K -----   [ anon ]
00007f642f5cc000   8192K rw---   [ anon ]
00007f642fdcc000      4K -----   [ anon ]
00007f642fdcd000   8192K rw---   [ anon ]
00007f64305cd000      4K -----   [ anon ]
00007f64305ce000   8192K rw---   [ anon ]
00007f6430dce000      4K -----   [ anon ]
00007f6430dcf000   8192K rw---   [ anon ]
00007f64315cf000      4K -----   [ anon ]
00007f64315d0000   8192K rw---   [ anon ]
00007f6431dd0000      4K -----   [ anon ]
00007f6431dd1000   8192K rw---   [ anon ]
00007f64325d1000      4K -----   [ anon ]
00007f64325d2000   8192K rw---   [ anon ]
00007f6432dd2000      4K -----   [ anon ]
00007f6432dd3000   8192K rw---   [ anon ]
00007f64335d3000      4K -----   [ anon ]
00007f64335d4000   8192K rw---   [ anon ]
00007f6433dd4000      4K -----   [ anon ]
00007f6433dd5000   8192K rw---   [ anon ]
00007f64345d5000      4K -----   [ anon ]
00007f64345d6000   8192K rw---   [ anon ]
00007f6434dd6000      4K -----   [ anon ]
00007f6434dd7000   8192K rw---   [ anon ]
00007f64355d7000      4K -----   [ anon ]
00007f64355d8000   8192K rw---   [ anon ]
00007f6435dd8000      4K -----   [ anon ]
00007f6435dd9000   8192K rw---   [ anon ]
00007f64365d9000      4K -----   [ anon ]
00007f64365da000   8192K rw---   [ anon ]
00007f6436dda000      4K -----   [ anon ]
00007f6436ddb000   8192K rw---   [ anon ]
00007f64375db000      4K -----   [ anon ]
00007f64375dc000   8192K rw---   [ anon ]
00007f6437ddc000      4K -----   [ anon ]
00007f6437ddd000   8192K rw---   [ anon ]
00007f64385dd000      4K -----   [ anon ]
00007f64385de000   8192K rw---   [ anon ]
00007f6438dde000      4K -----   [ anon ]
00007f6438ddf000   8192K rw---   [ anon ]
00007f64395df000      4K -----   [ anon ]
00007f64395e0000   8192K rw---   [ anon ]
00007f6439de0000      4K -----   [ anon ]
00007f6439de1000   8192K rw---   [ anon ]
00007f643a5e1000      4K -----   [ anon ]
00007f643a5e2000   8192K rw---   [ anon ]
00007f643ade2000      4K -----   [ anon ]
00007f643ade3000   8192K rw---   [ anon ]
00007f643b5e3000      4K -----   [ anon ]
00007f643b5e4000   8192K rw---   [ anon ]
00007f643bde4000      4K -----   [ anon ]
00007f643bde5000   8192K rw---   [ anon ]
00007f643c5e5000      4K -----   [ anon ]
00007f643c5e6000   8192K rw---   [ anon ]
00007f643cde6000      4K -----   [ anon ]
00007f643cde7000   8192K rw---   [ anon ]
00007f643d5e7000      4K -----   [ anon ]
00007f643d5e8000   8192K rw---   [ anon ]
00007f643dde8000      4K -----   [ anon ]
00007f643dde9000   8192K rw---   [ anon ]
00007f643e5e9000      4K -----   [ anon ]
00007f643e5ea000   8192K rw---   [ anon ]
00007f643edea000      4K -----   [ anon ]
00007f643edeb000   8192K rw---   [ anon ]
00007f643f5eb000      4K -----   [ anon ]
00007f643f5ec000   8192K rw---   [ anon ]
00007f643fdec000      4K -----   [ anon ]
00007f643fded000   8192K rw---   [ anon ]
00007f64405ed000      4K -----   [ anon ]
00007f64405ee000   8192K rw---   [ anon ]
00007f6440dee000      4K -----   [ anon ]
00007f6440def000   8192K rw---   [ anon ]
00007f64415ef000      4K -----   [ anon ]
00007f64415f0000   8192K rw---   [ anon ]
00007f6441df0000      4K -----   [ anon ]
00007f6441df1000   8192K rw---   [ anon ]
00007f64425f1000      4K -----   [ anon ]
00007f64425f2000   8192K rw---   [ anon ]
00007f6442df2000      4K -----   [ anon ]
00007f6442df3000   8192K rw---   [ anon ]
00007f64435f3000      4K -----   [ anon ]
00007f64435f4000   8192K rw---   [ anon ]
00007f6443df4000      4K -----   [ anon ]
00007f6443df5000   8192K rw---   [ anon ]
00007f64445f5000      4K -----   [ anon ]
00007f64445f6000   8192K rw---   [ anon ]
00007f6444df6000      4K -----   [ anon ]
00007f6444df7000   8192K rw---   [ anon ]
00007f64455f7000      4K -----   [ anon ]
00007f64455f8000   8192K rw---   [ anon ]
00007f6445df8000      4K -----   [ anon ]
00007f6445df9000   8192K rw---   [ anon ]
00007f64465f9000      4K -----   [ anon ]
00007f64465fa000   8192K rw---   [ anon ]
00007f6446dfa000      4K -----   [ anon ]
00007f6446dfb000   8192K rw---   [ anon ]
00007f64475fb000      4K -----   [ anon ]
00007f64475fc000   8192K rw---   [ anon ]
00007f6447dfc000      4K -----   [ anon ]
00007f6447dfd000   8192K rw---   [ anon ]
00007f64485fd000      4K -----   [ anon ]
00007f64485fe000   8192K rw---   [ anon ]
00007f6448dfe000      4K -----   [ anon ]
00007f6448dff000   8192K rw---   [ anon ]
00007f64495ff000      4K -----   [ anon ]
00007f6449600000   8192K rw---   [ anon ]
00007f6449e00000      4K rw---   [ anon ]
00007f644a000000     12K rw---   [ anon ]
00007f644a035000   1744K r-x-- libdb-5.3.so
00007f644a1e9000   2048K ----- libdb-5.3.so
00007f644a3e9000     28K r---- libdb-5.3.so
00007f644a3f0000     12K rw--- libdb-5.3.so
00007f644a3f3000     24K r-x-- libsasldb.so.3.0.0
00007f644a3f9000   2044K ----- libsasldb.so.3.0.0
00007f644a5f8000      4K r---- libsasldb.so.3.0.0
00007f644a5f9000      4K rw--- libsasldb.so.3.0.0
00007f644a5fa000     16K r-x-- libanonymous.so.3.0.0
00007f644a5fe000   2044K ----- libanonymous.so.3.0.0
00007f644a7fd000      4K r---- libanonymous.so.3.0.0
00007f644a7fe000      4K rw--- libanonymous.so.3.0.0
00007f644a7ff000      4K -----   [ anon ]
00007f644a800000   8192K rw---   [ anon ]
00007f644b000000    256K rw---   [ anon ]
00007f644b200000     64K rw---   [ anon ]
00007f644b400000  16512K rw---   [ anon ]
00007f644c600000    256K rw---   [ anon ]
00007f644c800000 4194304K rw---   [ anon ]
00007f654c9d7000      4K -----   [ anon ]
00007f654c9d8000   8192K rw---   [ anon ]
00007f654d1d8000      4K -----   [ anon ]
00007f654d1d9000   8192K rw---   [ anon ]
00007f654d9d9000      4K -----   [ anon ]
00007f654d9da000   8192K rw---   [ anon ]
00007f654e1da000      4K -----   [ anon ]
00007f654e1db000   8192K rw---   [ anon ]
00007f654e9db000      4K -----   [ anon ]
00007f654e9dc000   8192K rw---   [ anon ]
00007f654f1dc000      4K -----   [ anon ]
00007f654f1dd000   8192K rw---   [ anon ]
00007f654f9dd000      4K -----   [ anon ]
00007f654f9de000   8192K rw---   [ anon ]
00007f65501de000      4K -----   [ anon ]
00007f65501df000   8192K rw---   [ anon ]
00007f65509df000      4K -----   [ anon ]
00007f65509e0000   8192K rw---   [ anon ]
00007f65511e0000      4K -----   [ anon ]
00007f65511e1000   8192K rw---   [ anon ]
00007f65519e1000      4K -----   [ anon ]
00007f65519e2000   8192K rw---   [ anon ]
00007f65521e2000      4K -----   [ anon ]
00007f65521e3000   8192K rw---   [ anon ]
00007f65529e3000      4K -----   [ anon ]
00007f65529e4000   8192K rw---   [ anon ]
00007f65531e4000      4K -----   [ anon ]
00007f65531e5000   8192K rw---   [ anon ]
00007f65539e5000      4K -----   [ anon ]
00007f65539e6000   8192K rw---   [ anon ]
00007f65541e6000      4K -----   [ anon ]
00007f65541e7000   8192K rw---   [ anon ]
00007f65549e7000      4K -----   [ anon ]
00007f65549e8000   8192K rw---   [ anon ]
00007f65551e8000      4K -----   [ anon ]
00007f65551e9000   8192K rw---   [ anon ]
00007f65559e9000      4K -----   [ anon ]
00007f65559ea000   1024K rw---   [ anon ]
00007f6555aea000      4K -----   [ anon ]
00007f6555aeb000   1024K rw---   [ anon ]
00007f6555beb000      4K -----   [ anon ]
00007f6555bec000   8192K rw---   [ anon ]
00007f65563ec000      4K -----   [ anon ]
00007f65563ed000   8192K rw---   [ anon ]
00007f6556bed000      4K -----   [ anon ]
00007f6556bee000   8192K rw---   [ anon ]
00007f65573ee000      4K -----   [ anon ]
00007f65573ef000   1024K rw---   [ anon ]
00007f65574ef000      4K -----   [ anon ]
00007f65574f0000   8192K rw---   [ anon ]
00007f6557cf0000      4K -----   [ anon ]
00007f6557cf1000   8192K rw---   [ anon ]
00007f65584f1000      4K -----   [ anon ]
00007f65584f2000   8192K rw---   [ anon ]
00007f6558cf2000      4K -----   [ anon ]
00007f6558cf3000   8192K rw---   [ anon ]
00007f65594f3000      4K -----   [ anon ]
00007f65594f4000   8192K rw---   [ anon ]
00007f6559cf4000      4K -----   [ anon ]
00007f6559cf5000   8192K rw---   [ anon ]
00007f655a4f5000      4K -----   [ anon ]
00007f655a4f6000   8192K rw---   [ anon ]
00007f655acf6000    524K r-x-- libfreeblpriv3.so
00007f655ad79000   2048K ----- libfreeblpriv3.so
00007f655af79000      8K r---- libfreeblpriv3.so
00007f655af7b000      4K rw--- libfreeblpriv3.so
00007f655af7c000     16K rw---   [ anon ]
00007f655af80000    708K r-x-- libsqlite3.so.0.8.6
00007f655b031000   2044K ----- libsqlite3.so.0.8.6
00007f655b230000      8K r---- libsqlite3.so.0.8.6
00007f655b232000     12K rw--- libsqlite3.so.0.8.6
00007f655b235000    256K r-x-- libsoftokn3.so
00007f655b275000   2048K ----- libsoftokn3.so
00007f655b475000      4K r---- libsoftokn3.so
00007f655b476000      4K rw--- libsoftokn3.so
00007f655b477000      4K -----   [ anon ]
00007f655b478000   8192K rw---   [ anon ]
00007f655bc78000      4K -----   [ anon ]
00007f655bc79000   8192K rw---   [ anon ]
00007f655c479000     92K r-x-- libelf-0.176.so
00007f655c490000   2044K ----- libelf-0.176.so
00007f655c68f000      4K r---- libelf-0.176.so
00007f655c690000      4K rw--- libelf-0.176.so
00007f655c691000     24K r-x-- libogg.so.0.8.0
00007f655c697000   2044K ----- libogg.so.0.8.0
00007f655c896000      4K r---- libogg.so.0.8.0
00007f655c897000      4K rw--- libogg.so.0.8.0
00007f655c898000    176K r-x-- libvorbis.so.0.4.6
00007f655c8c4000   2044K ----- libvorbis.so.0.4.6
00007f655cac3000      4K r---- libvorbis.so.0.4.6
00007f655cac4000      4K rw--- libvorbis.so.0.4.6
00007f655cac5000   2764K r-x-- libvorbisenc.so.2.0.9
00007f655cd78000   2044K ----- libvorbisenc.so.2.0.9
00007f655cf77000    112K r---- libvorbisenc.so.2.0.9
00007f655cf93000      4K rw--- libvorbisenc.so.2.0.9
00007f655cf94000    268K r-x-- libFLAC.so.8.3.0
00007f655cfd7000   2048K ----- libFLAC.so.8.3.0
00007f655d1d7000      4K r---- libFLAC.so.8.3.0
00007f655d1d8000      4K rw--- libFLAC.so.8.3.0
00007f655d1d9000     44K r-x-- libgsm.so.1.0.12
00007f655d1e4000   2044K ----- libgsm.so.1.0.12
00007f655d3e3000      4K r---- libgsm.so.1.0.12
00007f655d3e4000      4K rw--- libgsm.so.1.0.12
00007f655d3e5000     92K r-x-- libnsl-2.17.so
00007f655d3fc000   2044K ----- libnsl-2.17.so
00007f655d5fb000      4K r---- libnsl-2.17.so
00007f655d5fc000      4K rw--- libnsl-2.17.so
00007f655d5fd000      8K rw---   [ anon ]
00007f655d5ff000    148K r-x-- liblzma.so.5.2.2
00007f655d624000   2044K ----- liblzma.so.5.2.2
00007f655d823000      4K r---- liblzma.so.5.2.2
00007f655d824000      4K rw--- liblzma.so.5.2.2
00007f655d825000     60K r-x-- libXi.so.6.1.0
00007f655d834000   2044K ----- libXi.so.6.1.0
00007f655da33000      4K r---- libXi.so.6.1.0
00007f655da34000      4K rw--- libXi.so.6.1.0
00007f655da35000     68K r-x-- libXext.so.6.4.0
00007f655da46000   2044K ----- libXext.so.6.4.0
00007f655dc45000      4K r---- libXext.so.6.4.0
00007f655dc46000      4K rw--- libXext.so.6.4.0
00007f655dc47000      8K r-x-- libXau.so.6.0.0
00007f655dc49000   2048K ----- libXau.so.6.0.0
00007f655de49000      4K r---- libXau.so.6.0.0
00007f655de4a000      4K rw--- libXau.so.6.0.0
00007f655de4b000    312K r-x-- libdw-0.176.so
00007f655de99000   2048K ----- libdw-0.176.so
00007f655e099000      8K r---- libdw-0.176.so
00007f655e09b000      4K rw--- libdw-0.176.so
00007f655e09c000    256K r-x-- libmount.so.1.1.0
00007f655e0dc000   2048K ----- libmount.so.1.1.0
00007f655e2dc000      4K r---- libmount.so.1.1.0
00007f655e2dd000      4K rw--- libmount.so.1.1.0
00007f655e2de000      4K rw---   [ anon ]
00007f655e2df000     12K r-x-- libgmodule-2.0.so.0.5600.1
00007f655e2e2000   2044K ----- libgmodule-2.0.so.0.5600.1
00007f655e4e1000      4K r---- libgmodule-2.0.so.0.5600.1
00007f655e4e2000      4K rw--- libgmodule-2.0.so.0.5600.1
00007f655e4e3000     28K r-x-- libffi.so.6.0.1
00007f655e4ea000   2044K ----- libffi.so.6.0.1
00007f655e6e9000      4K r---- libffi.so.6.0.1
00007f655e6ea000      4K rw--- libffi.so.6.0.1
00007f655e6eb000    144K r-x-- libselinux.so.1
00007f655e70f000   2044K ----- libselinux.so.1
00007f655e90e000      4K r---- libselinux.so.1
00007f655e90f000      4K rw--- libselinux.so.1
00007f655e910000      8K rw---   [ anon ]
00007f655e912000      8K r-x-- libfreebl3.so
00007f655e914000   2044K ----- libfreebl3.so
00007f655eb13000      4K r---- libfreebl3.so
00007f655eb14000      4K rw--- libfreebl3.so
00007f655eb15000     20K r-x-- libasyncns.so.0.3.1
00007f655eb1a000   2044K ----- libasyncns.so.0.3.1
00007f655ed19000      4K r---- libasyncns.so.0.3.1
00007f655ed1a000      4K rw--- libasyncns.so.0.3.1
00007f655ed1b000    352K r-x-- libsndfile.so.1.0.25
00007f655ed73000   2048K ----- libsndfile.so.1.0.25
00007f655ef73000      8K r---- libsndfile.so.1.0.25
00007f655ef75000      4K rw--- libsndfile.so.1.0.25
00007f655ef76000     16K rw---   [ anon ]
00007f655ef7a000     36K r-x-- libwrap.so.0.7.6
00007f655ef83000   2044K ----- libwrap.so.0.7.6
00007f655f182000      4K r---- libwrap.so.0.7.6
00007f655f183000      4K rw--- libwrap.so.0.7.6
00007f655f184000      4K rw---   [ anon ]
00007f655f185000    188K r-x-- libsystemd.so.0.6.0
00007f655f1b4000   2048K ----- libsystemd.so.0.6.0
00007f655f3b4000      4K r---- libsystemd.so.0.6.0
00007f655f3b5000      4K rw--- libsystemd.so.0.6.0
00007f655f3b6000     20K r-x-- libXtst.so.6.1.0
00007f655f3bb000   2044K ----- libXtst.so.6.1.0
00007f655f5ba000      4K r---- libXtst.so.6.1.0
00007f655f5bb000      4K rw--- libXtst.so.6.1.0
00007f655f5bc000     28K r-x-- libSM.so.6.0.1
00007f655f5c3000   2044K ----- libSM.so.6.0.1
00007f655f7c2000      4K r---- libSM.so.6.0.1
00007f655f7c3000      4K rw--- libSM.so.6.0.1
00007f655f7c4000     92K r-x-- libICE.so.6.3.0
00007f655f7db000   2044K ----- libICE.so.6.3.0
00007f655f9da000      4K r---- libICE.so.6.3.0
00007f655f9db000      4K rw--- libICE.so.6.3.0
00007f655f9dc000     16K rw---   [ anon ]
00007f655f9e0000    156K r-x-- libxcb.so.1.1.0
00007f655fa07000   2044K ----- libxcb.so.1.1.0
00007f655fc06000      4K r---- libxcb.so.1.1.0
00007f655fc07000      4K rw--- libxcb.so.1.1.0
00007f655fc08000   1248K r-x-- libX11.so.6.3.0
00007f655fd40000   2048K ----- libX11.so.6.3.0
00007f655ff40000      4K r---- libX11.so.6.3.0
00007f655ff41000     20K rw--- libX11.so.6.3.0
00007f655ff46000      4K r-x-- libX11-xcb.so.1.0.0
00007f655ff47000   2044K ----- libX11-xcb.so.1.0.0
00007f6560146000      4K r---- libX11-xcb.so.1.0.0
00007f6560147000      4K rw--- libX11-xcb.so.1.0.0
00007f6560148000     60K r-x-- libbz2.so.1.0.6
00007f6560157000   2044K ----- libbz2.so.1.0.6
00007f6560356000      4K r---- libbz2.so.1.0.6
00007f6560357000      4K rw--- libbz2.so.1.0.6
00007f6560358000     16K r-x-- libattr.so.1.1.0
00007f656035c000   2044K ----- libattr.so.1.1.0
00007f656055b000      4K r---- libattr.so.1.1.0
00007f656055c000      4K rw--- libattr.so.1.1.0
00007f656055d000     12K r-x-- libkeyutils.so.1.5
00007f6560560000   2044K ----- libkeyutils.so.1.5
00007f656075f000      4K r---- libkeyutils.so.1.5
00007f6560760000      4K rw--- libkeyutils.so.1.5
00007f6560761000     16K r-x-- libgpg-error.so.0.10.0
00007f6560765000   2044K ----- libgpg-error.so.0.10.0
00007f6560964000      4K r---- libgpg-error.so.0.10.0
00007f6560965000      4K rw--- libgpg-error.so.0.10.0
00007f6560966000     80K r-x-- libudev.so.1.6.2
00007f656097a000   2048K ----- libudev.so.1.6.2
00007f6560b7a000      4K r---- libudev.so.1.6.2
00007f6560b7b000      4K rw--- libudev.so.1.6.2
00007f6560b7c000     56K r-x-- liblz4.so.1.8.3
00007f6560b8a000   2044K ----- liblz4.so.1.8.3
00007f6560d89000      4K r---- liblz4.so.1.8.3
00007f6560d8a000      4K rw--- liblz4.so.1.8.3
00007f6560d8b000    268K r-x-- libjpeg.so.62.1.0
00007f6560dce000   2048K ----- libjpeg.so.62.1.0
00007f6560fce000      4K r---- libjpeg.so.62.1.0
00007f6560fcf000      4K rw--- libjpeg.so.62.1.0
00007f6560fd0000     64K rw---   [ anon ]
00007f6560fe0000    316K r-x-- libgobject-2.0.so.0.5600.1
00007f656102f000   2048K ----- libgobject-2.0.so.0.5600.1
00007f656122f000      4K r---- libgobject-2.0.so.0.5600.1
00007f6561230000      4K rw--- libgobject-2.0.so.0.5600.1
00007f6561231000   1624K r-x-- libgio-2.0.so.0.5600.1
00007f65613c7000   2048K ----- libgio-2.0.so.0.5600.1
00007f65615c7000     20K r---- libgio-2.0.so.0.5600.1
00007f65615cc000     12K rw--- libgio-2.0.so.0.5600.1
00007f65615cf000      8K rw---   [ anon ]
00007f65615d1000    256K r-x-- libopus.so.0.3.0
00007f6561611000   2048K ----- libopus.so.0.3.0
00007f6561811000      4K r---- libopus.so.0.3.0
00007f6561812000      4K rw--- libopus.so.0.3.0
00007f6561813000     56K r-x-- libcelt051.so.0.0.0
00007f6561821000   2044K ----- libcelt051.so.0.0.0
00007f6561a20000      4K r---- libcelt051.so.0.0.0
00007f6561a21000      4K rw--- libcelt051.so.0.0.0
00007f6561a22000    120K r-x-- libnl-3.so.200.23.0
00007f6561a40000   2048K ----- libnl-3.so.200.23.0
00007f6561c40000      8K r---- libnl-3.so.200.23.0
00007f6561c42000      4K rw--- libnl-3.so.200.23.0
00007f6561c43000    400K r-x-- libnl-route-3.so.200.23.0
00007f6561ca7000   2044K ----- libnl-route-3.so.200.23.0
00007f6561ea6000     12K r---- libnl-route-3.so.200.23.0
00007f6561ea9000     20K rw--- libnl-route-3.so.200.23.0
00007f6561eae000      8K rw---   [ anon ]
00007f6561eb0000    472K r-x-- libgmp.so.10.2.0
00007f6561f26000   2044K ----- libgmp.so.10.2.0
00007f6562125000      8K r---- libgmp.so.10.2.0
00007f6562127000      4K rw--- libgmp.so.10.2.0
00007f6562128000    152K r-x-- libhogweed.so.2.5
00007f656214e000   2044K ----- libhogweed.so.2.5
00007f656234d000      4K r---- libhogweed.so.2.5
00007f656234e000      4K rw--- libhogweed.so.2.5
00007f656234f000    188K r-x-- libnettle.so.4.7
00007f656237e000   2048K ----- libnettle.so.4.7
00007f656257e000      4K r---- libnettle.so.4.7
00007f656257f000      4K rw--- libnettle.so.4.7
00007f6562580000     68K r-x-- libtasn1.so.6.5.3
00007f6562591000   2048K ----- libtasn1.so.6.5.3
00007f6562791000      4K r---- libtasn1.so.6.5.3
00007f6562792000      4K rw--- libtasn1.so.6.5.3
00007f6562793000   1128K r-x-- libp11-kit.so.0.3.0
00007f65628ad000   2048K ----- libp11-kit.so.0.3.0
00007f6562aad000     40K r---- libp11-kit.so.0.3.0
00007f6562ab7000     40K rw--- libp11-kit.so.0.3.0
00007f6562ac1000      4K rw---   [ anon ]
00007f6562ac2000     56K r-x-- libkrb5support.so.0.1
00007f6562ad0000   2048K ----- libkrb5support.so.0.1
00007f6562cd0000      4K r---- libkrb5support.so.0.1
00007f6562cd1000      4K rw--- libkrb5support.so.0.1
00007f6562cd2000     32K r-x-- libcrypt-2.17.so
00007f6562cda000   2044K ----- libcrypt-2.17.so
00007f6562ed9000      4K r---- libcrypt-2.17.so
00007f6562eda000      4K rw--- libcrypt-2.17.so
00007f6562edb000    184K rw---   [ anon ]
00007f6562f09000     88K r-x-- libresolv-2.17.so
00007f6562f1f000   2048K ----- libresolv-2.17.so
00007f656311f000      4K r---- libresolv-2.17.so
00007f6563120000      4K rw--- libresolv-2.17.so
00007f6563121000      8K rw---   [ anon ]
00007f6563123000     16K r-x-- libcap.so.2.22
00007f6563127000   2044K ----- libcap.so.2.22
00007f6563326000      4K r---- libcap.so.2.22
00007f6563327000      4K rw--- libcap.so.2.22
00007f6563328000    312K r-x-- libdbus-1.so.3.14.14
00007f6563376000   2044K ----- libdbus-1.so.3.14.14
00007f6563575000      4K r---- libdbus-1.so.3.14.14
00007f6563576000      4K rw--- libdbus-1.so.3.14.14
00007f6563577000      4K rw---   [ anon ]
00007f6563578000    500K r-x-- libpulsecommon-10.0.so
00007f65635f5000   2048K ----- libpulsecommon-10.0.so
00007f65637f5000      8K r---- libpulsecommon-10.0.so
00007f65637f7000      4K rw--- libpulsecommon-10.0.so
00007f65637f8000     96K r-x-- libboost_iostreams-mt.so.1.53.0
00007f6563810000   2044K ----- libboost_iostreams-mt.so.1.53.0
00007f6563a0f000      8K r---- libboost_iostreams-mt.so.1.53.0
00007f6563a11000      4K rw--- libboost_iostreams-mt.so.1.53.0
00007f6563a12000    240K r-x-- libblkid.so.1.1.0
00007f6563a4e000   2044K ----- libblkid.so.1.1.0
00007f6563c4d000     12K r---- libblkid.so.1.1.0
00007f6563c50000      4K rw--- libblkid.so.1.1.0
00007f6563c51000      4K rw---   [ anon ]
00007f6563c52000      8K r-x-- libboost_random-mt.so.1.53.0
00007f6563c54000   2044K ----- libboost_random-mt.so.1.53.0
00007f6563e53000      4K r---- libboost_random-mt.so.1.53.0
00007f6563e54000      4K rw--- libboost_random-mt.so.1.53.0
00007f6563e55000     12K r-x-- libboost_system-mt.so.1.53.0
00007f6563e58000   2044K ----- libboost_system-mt.so.1.53.0
00007f6564057000      4K r---- libboost_system-mt.so.1.53.0
00007f6564058000      4K rw--- libboost_system-mt.so.1.53.0
00007f6564059000     84K r-x-- libboost_thread-mt.so.1.53.0
00007f656406e000   2044K ----- libboost_thread-mt.so.1.53.0
00007f656426d000      8K r---- libboost_thread-mt.so.1.53.0
00007f656426f000      4K rw--- libboost_thread-mt.so.1.53.0
00007f6564270000    384K r-x-- libpcre.so.1.2.0
00007f65642d0000   2048K ----- libpcre.so.1.2.0
00007f65644d0000      4K r---- libpcre.so.1.2.0
00007f65644d1000      4K rw--- libpcre.so.1.2.0
00007f65644d2000     84K r-x-- libgcc_s-4.8.5-20150702.so.1
00007f65644e7000   2044K ----- libgcc_s-4.8.5-20150702.so.1
00007f65646e6000      4K r---- libgcc_s-4.8.5-20150702.so.1
00007f65646e7000      4K rw--- libgcc_s-4.8.5-20150702.so.1
00007f65646e8000    932K r-x-- libstdc++.so.6.0.19
00007f65647d1000   2048K ----- libstdc++.so.6.0.19
00007f65649d1000     32K r---- libstdc++.so.6.0.19
00007f65649d9000      8K rw--- libstdc++.so.6.0.19
00007f65649db000     84K rw---   [ anon ]
00007f65649f0000    412K r-x-- libssl.so.1.0.2k
00007f6564a57000   2048K ----- libssl.so.1.0.2k
00007f6564c57000     16K r---- libssl.so.1.0.2k
00007f6564c5b000     28K rw--- libssl.so.1.0.2k
00007f6564c62000   2264K r-x-- libcrypto.so.1.0.2k
00007f6564e98000   2048K ----- libcrypto.so.1.0.2k
00007f6565098000    112K r---- libcrypto.so.1.0.2k
00007f65650b4000     52K rw--- libcrypto.so.1.0.2k
00007f65650c1000     16K rw---   [ anon ]
00007f65650c5000   1068K r-x-- libglusterfs.so.0.0.1
00007f65651d0000   2048K ----- libglusterfs.so.0.0.1
00007f65653d0000      8K r---- libglusterfs.so.0.0.1
00007f65653d2000      8K rw--- libglusterfs.so.0.0.1
00007f65653d4000     16K rw---   [ anon ]
00007f65653d8000     28K r-x-- libacl.so.1.1.0
00007f65653df000   2048K ----- libacl.so.1.1.0
00007f65655df000      4K r---- libacl.so.1.1.0
00007f65655e0000      4K rw--- libacl.so.1.1.0
00007f65655e1000    328K r-x-- libldap-2.4.so.2.10.7
00007f6565633000   2048K ----- libldap-2.4.so.2.10.7
00007f6565833000      8K r---- libldap-2.4.so.2.10.7
00007f6565835000      4K rw--- libldap-2.4.so.2.10.7
00007f6565836000     56K r-x-- liblber-2.4.so.2.10.7
00007f6565844000   2044K ----- liblber-2.4.so.2.10.7
00007f6565a43000      4K r---- liblber-2.4.so.2.10.7
00007f6565a44000      4K rw--- liblber-2.4.so.2.10.7
00007f6565a45000     12K r-x-- libcom_err.so.2.1
00007f6565a48000   2044K ----- libcom_err.so.2.1
00007f6565c47000      4K r---- libcom_err.so.2.1
00007f6565c48000      4K rw--- libcom_err.so.2.1
00007f6565c49000    196K r-x-- libk5crypto.so.3.1
00007f6565c7a000   2044K ----- libk5crypto.so.3.1
00007f6565e79000      8K r---- libk5crypto.so.3.1
00007f6565e7b000      4K rw--- libk5crypto.so.3.1
00007f6565e7c000    868K r-x-- libkrb5.so.3.3
00007f6565f55000   2044K ----- libkrb5.so.3.3
00007f6566154000     56K r---- libkrb5.so.3.3
00007f6566162000     12K rw--- libkrb5.so.3.3
00007f6566165000    296K r-x-- libgssapi_krb5.so.2.2
00007f65661af000   2048K ----- libgssapi_krb5.so.2.2
00007f65663af000      4K r---- libgssapi_krb5.so.2.2
00007f65663b0000      8K rw--- libgssapi_krb5.so.2.2
00007f65663b2000    200K r-x-- libidn.so.11.6.11
00007f65663e4000   2044K ----- libidn.so.11.6.11
00007f65665e3000      4K r---- libidn.so.11.6.11
00007f65665e4000      4K rw--- libidn.so.11.6.11
00007f65665e5000    500K r-x-- libgcrypt.so.11.8.2
00007f6566662000   2044K ----- libgcrypt.so.11.8.2
00007f6566861000      4K r---- libgcrypt.so.11.8.2
00007f6566862000     12K rw--- libgcrypt.so.11.8.2
00007f6566865000      4K rw---   [ anon ]
00007f6566866000   1808K r-x-- libc-2.17.so
00007f6566a2a000   2044K ----- libc-2.17.so
00007f6566c29000     16K r---- libc-2.17.so
00007f6566c2d000      8K rw--- libc-2.17.so
00007f6566c2f000     20K rw---   [ anon ]
00007f6566c34000   1028K r-x-- libm-2.17.so
00007f6566d35000   2044K ----- libm-2.17.so
00007f6566f34000      4K r---- libm-2.17.so
00007f6566f35000      4K rw--- libm-2.17.so
00007f6566f36000    640K r-x-- libpixman-1.so.0.34.0
00007f6566fd6000   2048K ----- libpixman-1.so.0.34.0
00007f65671d6000     32K r---- libpixman-1.so.0.34.0
00007f65671de000      4K rw--- libpixman-1.so.0.34.0
00007f65671df000     28K r-x-- libusbredirparser.so.1.0.0
00007f65671e6000   2044K ----- libusbredirparser.so.1.0.0
00007f65673e5000      4K r---- libusbredirparser.so.1.0.0
00007f65673e6000      4K rw--- libusbredirparser.so.1.0.0
00007f65673e7000     92K r-x-- libusb-1.0.so.0.1.0
00007f65673fe000   2048K ----- libusb-1.0.so.0.1.0
00007f65675fe000      4K r---- libusb-1.0.so.0.1.0
00007f65675ff000      4K rw--- libusb-1.0.so.0.1.0
00007f6567600000   1184K r-x-- libspice-server.so.1.12.4
00007f6567728000   2048K ----- libspice-server.so.1.12.4
00007f6567928000      8K r---- libspice-server.so.1.12.4
00007f656792a000      4K rw--- libspice-server.so.1.12.4
00007f656792b000     48K rw---   [ anon ]
00007f6567937000     96K r-x-- libibverbs.so.1.5.22.4
00007f656794f000   2044K ----- libibverbs.so.1.5.22.4
00007f6567b4e000      4K r---- libibverbs.so.1.5.22.4
00007f6567b4f000      4K rw--- libibverbs.so.1.5.22.4
00007f6567b50000     84K r-x-- librdmacm.so.1.1.22.4
00007f6567b65000   2044K ----- librdmacm.so.1.1.22.4
00007f6567d64000      4K r---- librdmacm.so.1.1.22.4
00007f6567d65000      4K rw--- librdmacm.so.1.1.22.4
00007f6567d66000      4K rw---   [ anon ]
00007f6567d67000    176K r-x-- libseccomp.so.2.3.1
00007f6567d93000   2044K ----- libseccomp.so.2.3.1
00007f6567f92000     84K r---- libseccomp.so.2.3.1
00007f6567fa7000      4K rw--- libseccomp.so.2.3.1
00007f6567fa8000     20K r-x-- libsnappy.so.1.1.4
00007f6567fad000   2044K ----- libsnappy.so.1.1.4
00007f65681ac000      4K r---- libsnappy.so.1.1.4
00007f65681ad000      4K rw--- libsnappy.so.1.1.4
00007f65681ae000    128K r-x-- liblzo2.so.2.0.0
00007f65681ce000   2044K ----- liblzo2.so.2.0.0
00007f65683cd000      4K r---- liblzo2.so.2.0.0
00007f65683ce000      4K rw--- liblzo2.so.2.0.0
00007f65683cf000   1208K r-x-- libgnutls.so.28.43.3
00007f65684fd000   2044K ----- libgnutls.so.28.43.3
00007f65686fc000     40K r---- libgnutls.so.28.43.3
00007f6568706000      8K rw--- libgnutls.so.28.43.3
00007f6568708000      4K rw---   [ anon ]
00007f6568709000    112K r-x-- libsasl2.so.3.0.0
00007f6568725000   2044K ----- libsasl2.so.3.0.0
00007f6568924000      4K r---- libsasl2.so.3.0.0
00007f6568925000      4K rw--- libsasl2.so.3.0.0
00007f6568926000    164K r-x-- libpng15.so.15.13.0
00007f656894f000   2048K ----- libpng15.so.15.13.0
00007f6568b4f000      4K r---- libpng15.so.15.13.0
00007f6568b50000      4K rw--- libpng15.so.15.13.0
00007f6568b51000     16K r-x-- libuuid.so.1.3.0
00007f6568b55000   2044K ----- libuuid.so.1.3.0
00007f6568d54000      4K r---- libuuid.so.1.3.0
00007f6568d55000      4K rw--- libuuid.so.1.3.0
00007f6568d56000    304K r-x-- libpulse.so.0.20.1
00007f6568da2000   2048K ----- libpulse.so.0.20.1
00007f6568fa2000      8K r---- libpulse.so.0.20.1
00007f6568fa4000      4K rw--- libpulse.so.0.20.1
00007f6568fa5000    992K r-x-- libasound.so.2.0.0
00007f656909d000   2044K ----- libasound.so.2.0.0
00007f656929c000     28K r---- libasound.so.2.0.0
00007f65692a3000      8K rw--- libasound.so.2.0.0
00007f65692a5000   5336K r-x-- librados.so.2.0.0
00007f65697db000   2048K ----- librados.so.2.0.0
00007f65699db000     76K r---- librados.so.2.0.0
00007f65699ee000     52K rw--- librados.so.2.0.0
00007f65699fb000 147460K rw---   [ anon ]
00007f65729fc000   7244K r-x-- librbd.so.1.0.0
00007f657310f000   2044K ----- librbd.so.1.0.0
00007f657330e000    104K r---- librbd.so.1.0.0
00007f6573328000     60K rw--- librbd.so.1.0.0
00007f6573337000 147464K rw---   [ anon ]
00007f657c339000      8K r-x-- libutil-2.17.so
00007f657c33b000   2044K ----- libutil-2.17.so
00007f657c53a000      4K r---- libutil-2.17.so
00007f657c53b000      4K rw--- libutil-2.17.so
00007f657c53c000      8K r-x-- libdl-2.17.so
00007f657c53e000   2048K ----- libdl-2.17.so
00007f657c73e000      4K r---- libdl-2.17.so
00007f657c73f000      4K rw--- libdl-2.17.so
00007f657c740000     92K r-x-- libpthread-2.17.so
00007f657c757000   2044K ----- libpthread-2.17.so
00007f657c956000      4K r---- libpthread-2.17.so
00007f657c957000      4K rw--- libpthread-2.17.so
00007f657c958000     16K rw---   [ anon ]
00007f657c95c000    232K r-x-- libnspr4.so
00007f657c996000   2044K ----- libnspr4.so
00007f657cb95000      4K r---- libnspr4.so
00007f657cb96000      8K rw--- libnspr4.so
00007f657cb98000      8K rw---   [ anon ]
00007f657cb9a000     16K r-x-- libplc4.so
00007f657cb9e000   2044K ----- libplc4.so
00007f657cd9d000      4K r---- libplc4.so
00007f657cd9e000      4K rw--- libplc4.so
00007f657cd9f000     12K r-x-- libplds4.so
00007f657cda2000   2044K ----- libplds4.so
00007f657cfa1000      4K r---- libplds4.so
00007f657cfa2000      4K rw--- libplds4.so
00007f657cfa3000    164K r-x-- libnssutil3.so
00007f657cfcc000   2044K ----- libnssutil3.so
00007f657d1cb000     28K r---- libnssutil3.so
00007f657d1d2000      4K rw--- libnssutil3.so
00007f657d1d3000   1176K r-x-- libnss3.so
00007f657d2f9000   2048K ----- libnss3.so
00007f657d4f9000     20K r---- libnss3.so
00007f657d4fe000      8K rw--- libnss3.so
00007f657d500000      8K rw---   [ anon ]
00007f657d502000    148K r-x-- libsmime3.so
00007f657d527000   2044K ----- libsmime3.so
00007f657d726000     12K r---- libsmime3.so
00007f657d729000      4K rw--- libsmime3.so
00007f657d72a000    332K r-x-- libssl3.so
00007f657d77d000   2048K ----- libssl3.so
00007f657d97d000     16K r---- libssl3.so
00007f657d981000      4K rw--- libssl3.so
00007f657d982000      4K rw---   [ anon ]
00007f657d983000   1104K r-x-- libglib-2.0.so.0.5600.1
00007f657da97000   2044K ----- libglib-2.0.so.0.5600.1
00007f657dc96000      4K r---- libglib-2.0.so.0.5600.1
00007f657dc97000      4K rw--- libglib-2.0.so.0.5600.1
00007f657dc98000      4K rw---   [ anon ]
00007f657dc99000      4K r-x-- libgthread-2.0.so.0.5600.1
00007f657dc9a000   2044K ----- libgthread-2.0.so.0.5600.1
00007f657de99000      4K r---- libgthread-2.0.so.0.5600.1
00007f657de9a000      4K rw--- libgthread-2.0.so.0.5600.1
00007f657de9b000    280K r-x-- libtcmalloc.so.4.4.5
00007f657dee1000   2048K ----- libtcmalloc.so.4.4.5
00007f657e0e1000      4K r---- libtcmalloc.so.4.4.5
00007f657e0e2000      4K rw--- libtcmalloc.so.4.4.5
00007f657e0e3000   1716K rw---   [ anon ]
00007f657e290000     28K r-x-- librt-2.17.so
00007f657e297000   2044K ----- librt-2.17.so
00007f657e496000      4K r---- librt-2.17.so
00007f657e497000      4K rw--- librt-2.17.so
00007f657e498000    172K r-x-- libssh2.so.1.0.1
00007f657e4c3000   2048K ----- libssh2.so.1.0.1
00007f657e6c3000      4K r---- libssh2.so.1.0.1
00007f657e6c4000      4K rw--- libssh2.so.1.0.1
00007f657e6c5000    112K r-x-- libgfxdr.so.0.0.1
00007f657e6e1000   2048K ----- libgfxdr.so.0.0.1
00007f657e8e1000      4K r---- libgfxdr.so.0.0.1
00007f657e8e2000      4K rw---   [ anon ]
00007f657e8e3000    108K r-x-- libgfrpc.so.0.0.1
00007f657e8fe000   2048K ----- libgfrpc.so.0.0.1
00007f657eafe000      4K r---- libgfrpc.so.0.0.1
00007f657eaff000    140K rw--- libgfrpc.so.0.0.1
00007f657eb22000    188K r-x-- libgfapi.so.0.0.0
00007f657eb51000   2044K ----- libgfapi.so.0.0.0
00007f657ed50000      4K r---- libgfapi.so.0.0.0
00007f657ed51000      4K rw--- libgfapi.so.0.0.0
00007f657ed52000    408K r-x-- libcurl.so.4.3.0
00007f657edb8000   2048K ----- libcurl.so.4.3.0
00007f657efb8000      8K r---- libcurl.so.4.3.0
00007f657efba000      4K rw--- libcurl.so.4.3.0
00007f657efbb000      4K rw---   [ anon ]
00007f657efbc000    116K r-x-- libiscsi.so.2.0.10900
00007f657efd9000   2044K ----- libiscsi.so.2.0.10900
00007f657f1d8000      4K r---- libiscsi.so.2.0.10900
00007f657f1d9000      4K rw--- libiscsi.so.2.0.10900
00007f657f1da000      4K r-x-- libaio.so.1.0.1
00007f657f1db000   2044K ----- libaio.so.1.0.1
00007f657f3da000      4K r---- libaio.so.1.0.1
00007f657f3db000      4K rw--- libaio.so.1.0.1
00007f657f3dc000     84K r-x-- libz.so.1.2.7
00007f657f3f1000   2044K ----- libz.so.1.2.7
00007f657f5f0000      4K r---- libz.so.1.2.7
00007f657f5f1000      4K rw--- libz.so.1.2.7
00007f657f5f2000    136K r-x-- ld-2.17.so
00007f657f6a2000     12K rw-s-   [ anon ]
00007f657f6a5000     12K rw-s-   [ anon ]
00007f657f6a8000     12K rw-s-   [ anon ]
00007f657f6ab000     12K rw-s-   [ anon ]
00007f657f6ae000     12K rw-s-   [ anon ]
00007f657f6b1000     12K rw-s-   [ anon ]
00007f657f6b4000     12K rw-s-   [ anon ]
00007f657f6b7000     12K rw-s-   [ anon ]
00007f657f6ba000     12K rw-s-   [ anon ]
00007f657f6bd000     12K rw-s-   [ anon ]
00007f657f6c0000     12K rw-s-   [ anon ]
00007f657f6c3000     12K rw-s-   [ anon ]
00007f657f6c6000     12K rw-s-   [ anon ]
00007f657f6c9000     12K rw-s-   [ anon ]
00007f657f6cc000      4K -----   [ anon ]
00007f657f6cd000   1256K rw---   [ anon ]
00007f657f808000     12K rw-s-   [ anon ]
00007f657f80b000     12K rw-s-   [ anon ]
00007f657f80e000      4K rw-s- zero (deleted)
00007f657f80f000     12K rw-s- zero (deleted)
00007f657f812000      4K rw---   [ anon ]
00007f657f813000      4K r---- ld-2.17.so
00007f657f814000      4K rw--- ld-2.17.so
00007f657f815000      4K rw---   [ anon ]
00007ffe8580d000    132K rw---   [ stack ]
00007ffe8595d000      8K r-x--   [ anon ]
ffffffffff600000      4K r-x--   [ anon ]
 total          5751900K

4. Stop the qemu-kvm process while fio is still running, and check the valgrind result.
# valgrind sh qemu.sh RHEL-7.9-x86_64-latest.qcow2 
==23474== Memcheck, a memory error detector
==23474== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==23474== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==23474== Command: sh qemu.sh RHEL-7.9-x86_64-latest.qcow2
==23474== 
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) q ------------------------------------------------------> Stop qemu-kvm here.
==23474== 
==23474== HEAP SUMMARY:
==23474==     in use at exit: 27,965 bytes in 585 blocks
==23474==   total heap usage: 1,756 allocs, 1,171 frees, 53,509 bytes allocated
==23474== 
==23474== LEAK SUMMARY:
==23474==    definitely lost: 0 bytes in 0 blocks
==23474==    indirectly lost: 0 bytes in 0 blocks
==23474==      possibly lost: 0 bytes in 0 blocks
==23474==    still reachable: 27,965 bytes in 585 blocks
==23474==         suppressed: 0 bytes in 0 blocks
==23474== Rerun with --leak-check=full to see details of leaked memory
==23474== 
==23474== For lists of detected and suppressed errors, rerun with: -s
==23474== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) -------------------------> No errors.

Comment 6 Tingting Mao 2021-09-24 12:12:49 UTC
This bug seems to be reproduced. Could anyone help check whether the reproduction steps below are okay? Thanks.


Tested env:
qemu-kvm-1.5.3-175.el7_9.4.x86_64
kernel-3.10.0-1160.el7.x86_64


Steps:
1. Boot guest under valgrind
# valgrind --trace-children=yes --track-origins=yes --leak-check=full --show-leak-kinds=definite --log-file=/tmp/valgrind_qemu.log sh qemu.sh RHEL-7.9-x86_64-latest.qcow2

Note:
RBD image info:
# qemu-img info rbd:rbd/example/disk
image: rbd:rbd/example/disk
file format: raw
virtual size: 80G (85899345920 bytes)
disk size: unavailable

QEMU commands for booting the guest:
# cat qemu.sh 
/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-sandbox off  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=$1 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=rbd:rbd/example/disk \ --------------------------> RBD image is worked as a data disk.
	-device scsi-hd,id=image2,drive=drive_image2 \
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+invtsc \
	-vnc :0  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \

2. Make a file system on the data disk, mount it, and run fio in the guest
(guest)# mkfs.xfs /dev/sdb
(guest)# mount /dev/sdb /root
(guest)# fio leak_test.fio 
iometer: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
iometer_just_write: (g=1): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
iometer_just_read: (g=2): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
fio-3.7
Starting 3 processes
Jobs: 1 (f=1): [m(1),P(2)][1.9%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:29m:12s]               
Message from syslogd@bootp-73-11-18 at Sep 24 17:28:44 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:1:100] -----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][6.8%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:24m:00s]        
Message from syslogd@bootp-73-11-18 at Sep 24 17:33:07 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 29s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][8.7%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:20m:38s]        
Message from syslogd@bootp-73-11-18 at Sep 24 17:34:39 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][17.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:12m:37s]        
Message from syslogd@bootp-73-11-18 at Sep 24 17:41:52 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][24.6%][r=0KiB/s,w=4096KiB/s][r=0,w=1 IOPS][eta 01h:06m:40s]     
Message from syslogd@bootp-73-11-18 at Sep 24 17:48:42 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][25.9%][r=4096KiB/s,w=0KiB/s][r=1,w=0 IOPS][eta 01h:05m:36s]     
Message from syslogd@bootp-73-11-18 at Sep 24 17:49:57 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][29.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:02m:56s]        
Message from syslogd@bootp-73-11-18 at Sep 24 17:53:18 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][31.5%][r=0KiB/s,w=4100KiB/s][r=0,w=1 IOPS][eta 01h:01m:37s]     
Message from syslogd@bootp-73-11-18 at Sep 24 17:55:21 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 31s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.
Jobs: 1 (f=1): [m(1),P(2)][36.2%][r=0KiB/s,w=4100KiB/s][r=0,w=1 IOPS][eta 57m:10s]        
Message from syslogd@bootp-73-11-18 at Sep 24 17:59:26 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:1:100]-----------------------------------------------> CPU soft lockup.

3. Quit the qemu process and check the log
# valgrind --trace-children=yes --track-origins=yes --leak-check=full --show-leak-kinds=definite --log-file=/tmp/valgrind_qemu.log sh qemu.sh RHEL-7.9-x86_64-latest.qcow2
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) c
(qemu) q ---------------------------------------------------------> Quit here.

# cat /tmp/valgrind_qemu.log.xfs
......

==2978== 82,944 (66,816 direct, 16,128 indirect) bytes in 522 blocks are definitely lost in loss record 2,573 of 2,588
==2978==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==2978==    by 0x67E16DD: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==2978==    by 0x67F8CDD: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==2978==    by 0x19754E: qemu_aio_get (block.c:4808)
==2978==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==2978==    by 0x1D808C: dma_bdrv_read (dma-helpers.c:231)
==2978==    by 0x237AE0: scsi_do_read (scsi-disk.c:360)
==2978==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==2978==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==2978==    by 0x26327F: main_loop_wait (main-loop.c:476)
==2978==    by 0x1818BF: main_loop (vl.c:1997)
==2978==    by 0x1818BF: main (vl.c:4367)
==2978==
==2978== 715,904 (628,864 direct, 87,040 indirect) bytes in 4,913 blocks are definitely lost in loss record 2,585 of 2,588
==2978==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==2978==    by 0x67E16DD: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==2978==    by 0x67F8CDD: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==2978==    by 0x19754E: qemu_aio_get (block.c:4808)
==2978==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==2978==    by 0x1D80BC: dma_bdrv_write (dma-helpers.c:239)
==2978==    by 0x2382DC: scsi_write_data (scsi-disk.c:530)
==2978==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==2978==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==2978==    by 0x26327F: main_loop_wait (main-loop.c:476)
==2978==    by 0x1818BF: main_loop (vl.c:1997)
==2978==    by 0x1818BF: main (vl.c:4367)
==2978==
==2978== LEAK SUMMARY:
==2978==    definitely lost: 719,374 bytes in 5,458 blocks
==2978==    indirectly lost: 103,520 bytes in 824 blocks
==2978==      possibly lost: 19,144 bytes in 58 blocks
==2978==    still reachable: 10,716,409 bytes in 6,153 blocks
==2978==                       of which reachable via heuristic:
==2978==                         stdstring          : 30 bytes in 1 blocks
==2978==                         newarray           : 1,536 bytes in 16 blocks
==2978==         suppressed: 0 bytes in 0 blocks
==2978== Reachable blocks (those to which a pointer was found) are not shown.
==2978== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==2978==
==2978== For lists of detected and suppressed errors, rerun with: -s
==2978== ERROR SUMMARY: 6096619 errors from 974 contexts (suppressed: 0 from 0) ------------------------------------------> Errors
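As a side note, the "definitely lost" record headers can be pulled out of a long valgrind log with a short helper; a minimal sketch (the log path matches the `--log-file` used above, adjust as needed):

```shell
# leak_records: print the "definitely lost" record headers from a valgrind
# log, with the ==PID== prefix stripped so the byte/block counts line up.
leak_records() {
    grep 'definitely lost in loss record' "$1" \
        | sed 's/^==[0-9]*==[[:space:]]*//'
}
```

Usage: `leak_records /tmp/valgrind_qemu.log`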

Comment 7 Tingting Mao 2021-09-26 02:06:19 UTC
I tried the steps in comment 6 with virtio-blk instead to boot the guest; there is no loss record in the log file.

The steps and test env are the same as in comment 6, but the boot command line is:
/usr/libexec/qemu-kvm \
        -name 'avocado-vt-vm1'  \
        -sandbox off  \
        -machine pc  \
        -nodefaults  \
        -vga std  \
        -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=$1 \
        -device virtio-blk-pci,drive=drive_image1,id=os-disk \
        -drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=rbd:rbd/example/disk \
        -device virtio-blk-pci,drive=drive_image2,id=data-disk \
        -device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
        -netdev tap,id=ids4KA3w,vhost=on \
        -m 4G  \
        -smp 16,cores=8,threads=1,sockets=2  \
        -cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+invtsc \
        -vnc :0  \
        -rtc base=localtime,clock=host,driftfix=slew  \
        -boot menu=off,strict=off,order=cdn,once=c \
        -enable-kvm \
        -monitor stdio \


Results:
......
==10153== Syscall param ioctl(generic) points to uninitialised byte(s)
==10153==    at 0x1D8EA307: ioctl (in /usr/lib64/libc-2.17.so)
==10153==    by 0x31FF21: kvm_vm_ioctl (kvm-all.c:1760)
==10153==    by 0x2FEF1B: kvm_pit_put (i8254.c:157)
==10153==    by 0x1E260C: pcspk_io_write (pcspk.c:149)
==10153==    by 0x323012: access_with_adjusted_size (memory.c:365)
==10153==    by 0x3249A2: memory_region_iorange_write (memory.c:471)
==10153==    by 0x3220F1: kvm_handle_io (kvm-all.c:1526)
==10153==    by 0x3220F1: kvm_cpu_exec (kvm-all.c:1683)
==10153==    by 0x2D3FC4: qemu_kvm_cpu_thread_fn (cpus.c:802)
==10153==    by 0x7AD4EA4: start_thread (in /usr/lib64/libpthread-2.17.so)
==10153==    by 0x1D8F396C: clone (in /usr/lib64/libc-2.17.so)
==10153==  Address 0x297adbcc is on thread 3's stack
==10153==  in frame #2, created by kvm_pit_put (i8254.c:126)
==10153==  Uninitialised value was created by a stack allocation
==10153==    at 0x2FF1F5: kvm_pit_get (i8254.c:190)
==10153==    by 0x2FF1F5: kvm_pit_set_gate (i8254.c:170)
==10153==
==10153== Syscall param ioctl(generic) points to uninitialised byte(s)
==10153==    at 0x1D8EA307: ioctl (in /usr/lib64/libc-2.17.so)
==10153==    by 0x31FF21: kvm_vm_ioctl (kvm-all.c:1760)
==10153==    by 0x320E09: kvm_physical_sync_dirty_bitmap (kvm-all.c:442)
==10153==    by 0x3213CB: kvm_set_phys_mem (kvm-all.c:670)
==10153==    by 0x3243CB: address_space_update_topology_pass.isra.5 (memory.c:721)
==10153==    by 0x3252C2: address_space_update_topology (memory.c:757)
==10153==    by 0x3252C2: memory_region_transaction_commit (memory.c:782)
==10153==    by 0x22DE86: pci_update_mappings (pci.c:1111)
==10153==    by 0x22E197: pci_default_write_config (pci.c:1166)
==10153==    by 0x323012: access_with_adjusted_size (memory.c:365)
==10153==    by 0x3249A2: memory_region_iorange_write (memory.c:471)
==10153==    by 0x322334: kvm_handle_io (kvm-all.c:1529)
==10153==    by 0x322334: kvm_cpu_exec (kvm-all.c:1683)
==10153==    by 0x2D3FC4: qemu_kvm_cpu_thread_fn (cpus.c:802)
==10153==  Address 0x297ad834 is on thread 3's stack
==10153==  in frame #2, created by kvm_physical_sync_dirty_bitmap (kvm-all.c:402)
==10153==  Uninitialised value was created by a stack allocation
==10153==    at 0x320D10: kvm_physical_sync_dirty_bitmap (kvm-all.c:402)
==10153==

Comment 8 Tingting Mao 2021-09-26 03:21:10 UTC
Hi Germano,

I tried a local file image with virtio-scsi and hit the memory leak as well. Could you please check whether the steps are okay, and whether the customer hits the memory leak with local image files too?

Thanks.



Tested env:
qemu-kvm-1.5.3-175.el7_9.4.x86_64
kernel-3.10.0-1160.el7.x86_64


Steps:
1. Boot guest under valgrind
# valgrind --trace-children=yes --track-origins=yes --leak-check=full --show-leak-kinds=definite --log-file=/tmp/valgrind_qemu.log sh qemu_file.sh RHEL-7.9-x86_64-latest.qcow2

Note:
local image file info:
# qemu-img create -f raw data.img 80G -----------------------------------> Create a local image file.
# qemu-img info data.img 
image: data.img
file format: raw
virtual size: 80G (85899345920 bytes)
disk size: 0

QEMU commands for booting the guest:
# cat qemu_file.sh 
/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-sandbox off  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=$1 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=data.img \ -----------------------------------------> The local image file as the data disk.
	-device scsi-hd,id=image2,drive=drive_image2 \
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+invtsc \
	-vnc :0  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \


2. Make a file system on the data disk, mount it, and run fio in the guest
(guest)# mkfs.xfs /dev/sdb
(guest)# mount /dev/sdb /root
(guest)# fio leak_test.fio 
iometer: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
iometer_just_write: (g=1): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
iometer_just_read: (g=2): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=8
fio-3.7
Starting 3 processes
iometer: Laying out IO file (1 file / 81920MiB)

Message from syslogd@bootp-73-11-18 at Sep 26 10:52:02 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:01:46 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:2:121]

Message from syslogd@bootp-73-11-18 at Sep 26 11:07:14 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:08:29 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:08:29 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:1:100]

Message from syslogd@bootp-73-11-18 at Sep 26 11:12:49 ...
 kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 30s! [kworker/0:1:100]

Note:
# cat leak_test.fio 
[global]
randrepeat=0
filename=/root/test.dat
iodepth=8
size=80g
direct=0
ioengine=libaio

[iometer]
stonewall
bs=4M
rw=randrw

[iometer_just_write]
stonewall
bs=4M
rw=write

[iometer_just_read]
stonewall
bs=4M
rw=read
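To keep the workload going for longer runs (the leak grows with I/O volume), the job file can be re-run in a loop from a small wrapper in the guest; a minimal sketch, assuming leak_test.fio sits in the current directory:

```shell
# run_n: run a command N times in a row, stopping early if a run fails.
# Usage: run_n <iterations> <command> [args...]
run_n() {
    n="$1"; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@" || return 1
        i=$((i + 1))
    done
}
```

Usage in the guest: `run_n 10 fio leak_test.fio`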


3. Quit the qemu process and check the log
# valgrind --trace-children=yes --track-origins=yes --leak-check=full --show-leak-kinds=definite --log-file=/tmp/valgrind_qemu.log sh qemu_file.sh RHEL-7.9-x86_64-latest.qcow2
QEMU 1.5.3 monitor - type 'help' for more information
(qemu) q ---------------------------------------------------------> Quit here.

# cat /tmp/valgrind_qemu.log
......
==10647== 86,016 (68,480 direct, 17,536 indirect) bytes in 535 blocks are definitely lost in loss record 2,485 of 2,495
==10647==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==10647==    by 0x67E16DD: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==10647==    by 0x67F8CDD: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==10647==    by 0x19754E: qemu_aio_get (block.c:4808)
==10647==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==10647==    by 0x1D808C: dma_bdrv_read (dma-helpers.c:231)
==10647==    by 0x237AE0: scsi_do_read (scsi-disk.c:360)
==10647==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==10647==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==10647==    by 0x26327F: main_loop_wait (main-loop.c:476)
==10647==    by 0x1818BF: main_loop (vl.c:1997)
==10647==    by 0x1818BF: main (vl.c:4367)
==10647==
==10647== 938,496 (827,264 direct, 111,232 indirect) bytes in 6,463 blocks are definitely lost in loss record 2,493 of 2,495
==10647==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==10647==    by 0x67E16DD: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==10647==    by 0x67F8CDD: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==10647==    by 0x19754E: qemu_aio_get (block.c:4808)
==10647==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==10647==    by 0x1D80BC: dma_bdrv_write (dma-helpers.c:239)
==10647==    by 0x2382DC: scsi_write_data (scsi-disk.c:530)
==10647==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==10647==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==10647==    by 0x26327F: main_loop_wait (main-loop.c:476)
==10647==    by 0x1818BF: main_loop (vl.c:1997)
==10647==    by 0x1818BF: main (vl.c:4367)
==10647==
==10647== LEAK SUMMARY:
==10647==    definitely lost: 917,038 bytes in 7,020 blocks
==10647==    indirectly lost: 129,120 bytes in 1,024 blocks
==10647==      possibly lost: 73,352 bytes in 135 blocks
==10647==    still reachable: 4,749,901 bytes in 6,020 blocks
==10647==                       of which reachable via heuristic:
==10647==                         stdstring          : 30 bytes in 1 blocks
==10647==                         newarray           : 1,536 bytes in 16 blocks
==10647==         suppressed: 0 bytes in 0 blocks
==10647== Reachable blocks (those to which a pointer was found) are not shown.
==10647== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==10647==
==10647== For lists of detected and suppressed errors, rerun with: -s
==10647== ERROR SUMMARY: 96302 errors from 101 contexts (suppressed: 0 from 0) --------------------------------------------> Still errors.

Comment 9 Germano Veit Michel 2021-09-26 22:16:39 UTC
Sorry for not replying earlier.

I had trouble reproducing this problem as well; it's not very clear to me what exactly triggers it.

(In reply to Tingting Mao from comment #8)
> Hi Germano,
> 
> I tried a local file image with virtio-scsi and hit the memory leak as well.

Interesting, it seems to be the same leak according to valgrind, but your numbers are much lower.
If you leave it running for several hours with fio looping in the Guest, do you also get huge amounts of leaked memory?
The customer scenario would quickly go to several hundred MB and even GB after a while.

I also did not see CPU soft lockups in the customer logs; I think that might be something different and unrelated.

> Could you please help check whether the steps are okay, and whether the
> customer hits the memory leak with local image files as well?

We tried krbd on the customer side (a local block device instead of a local file) and it did not reproduce.

Your reproduction steps from comment #6 are exactly what the customer has hit (using librbd), so maybe we can use that for verifying the bug later.

But now I'm surprised this also happens on file storage; we should have seen this before on RHEL 7, unless some recent change introduced this leak.

Maybe we should get dev input first?

Thanks for working on this!

Comment 10 John Ferlan 2021-09-27 18:30:35 UTC
Klaus - Stefano has taken care of RBD/Ceph historically, although this may be something more related to SCSI processing if I'm reading comment 7 correctly.

Comment 11 Klaus Heinrich Kiwi 2021-09-27 18:58:36 UTC
(In reply to John Ferlan from comment #10)
> Klaus - Stefano has taken care of RBD/Ceph historically, although this may
> be something more related to SCSI processing if I'm reading comment 7
> correctly.

Stefano investigated and closed Bug 1731078, and that investigation led to the creation of another memory leak bug, Bug 1975640, which is in our backlog as low priority.

Stefano, can you investigate this one nevertheless? As this one shows pretty severe memory growth, I'm assigning it a high priority. 

Thanks,

 -Klaus

Comment 12 Stefano Garzarella 2021-09-28 14:35:47 UTC
(In reply to Klaus Heinrich Kiwi from comment #11)
> (In reply to John Ferlan from comment #10)
> > Klaus - Stefano has taken care of RBD/Ceph historically, although this may
> > be something more related to SCSI processing if I'm reading comment 7
> > correctly.
> 
> Stefano investigated and closed Bug 1731078 and that investigation caused
> the creation of another memory leak Bug 1975640 which is in our backlog as
> Low Priority.
> 
> Stefano, can you investigate this one nevertheless? As this one shows pretty
> severe memory growth, I'm assigning it a high priority. 

Yep, I had a quick look and it seems to be a leak in dma-helpers.c.
Unfortunately that code has changed a lot in the last few years, so the current upstream version is very different.

@timao can you try with the latest qemu-kvm in RHEL 8 to see whether we have the same problem or it has been fixed over time?

Comment 13 Tingting Mao 2021-09-29 10:04:04 UTC
Tried with an RBD image and a local image on the latest RHEL 8; the leak is very small in both cases.


Test env:
qemu-kvm-6.1.50-1.scrmod+el8.6.0+12715+7e2a0318.wrb210922
kernel-4.18.0-339.el8.x86_64


The steps and the test RBD image are the same as in comment 6, but the boot command line is:
/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 15360  \
    -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
    -cpu 'Haswell-noTSX',+kvm_pv_unhalt \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:1c:0c:0d:e3:4c,id=idjmZXQS,netdev=idEFQ4i1,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idEFQ4i1,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -monitor stdio \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=5 \
    -device virtio-scsi-pci,id=virtio_scsi_pci2,bus=pcie-root-port-5,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=RHEL-7.9-x86_64-latest.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x1.0x6,bus=pcie.0,chassis=6 \
    -device virtio-scsi-pci,id=virtio_scsi_pci3,bus=pcie-root-port-6,addr=0x0 \
    -blockdev node-name=file_image2,driver=rbd,auto-read-only=on,discard=unmap,pool=rbd,image=example/disk,cache.direct=on,cache.no-flush=off \  ---------------------------------------> RBD image is here.
    -blockdev node-name=drive_image2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image2 \
    -device scsi-hd,id=image2,drive=drive_image2,write-cache=on \
    -chardev socket,server=on,path=/var/tmp/monitor-qmpmonitor1-20210721-024113-AsZ7KYro,id=qmp_id_qmpmonitor1,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \


Result for RBD image:
# cat /tmp/valgrind_qemu.log
==70761== Memcheck, a memory error detector
==70761== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==70761== Using Valgrind-3.17.0 and LibVEX; rerun with -h for copyright info
==70761== Command: /usr/sbin/ip link set tap0 nomaster
==70761== Par==70473== 
==70473== HEAP SUMMARY:
==70473==     in use at exit: 55,825 bytes in 858 blocks
==70473==   total heap usage: 3,972 allocs, 3,114 frees, 140,329 bytes allocated
==70473== 
==70473== LEAK SUMMARY:
==70473==    definitely lost: 0 bytes in 0 blocks
==70473==    indirectly lost: 0 bytes in 0 blocks
==70473==      possibly lost: 0 bytes in 0 blocks
==70473==    still reachable: 55,825 bytes in 858 blocks
==70473==         suppressed: 0 bytes in 0 blocks
==70473== Reachable blocks (those to which a pointer was found) are not shown.
==70473== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==70473== 
==70473== For lists of detected and suppressed errors, rerun with: -s
==70473== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
 from 0)
==70474== 
==70474== HEAP SUMMARY:
==70474==     in use at exit: 20,573,178 bytes in 31,299 blocks
==70474==   total heap usage: 1,709,780 allocs, 1,678,481 frees, 5,280,698,513 bytes allocated
==70474== 
==70474== LEAK SUMMARY:
==70474==    definitely lost: 0 bytes in 0 blocks
==70474==    indirectly lost: 0 bytes in 0 blocks
==70474==      possibly lost: 11,852 bytes in 44 blocks
==70474==    still reachable: 20,548,108 bytes in 31,252 blocks
==70474==                       of which reachable via heuristic:
==70474==                         newarray           : 32 bytes in 1 blocks
==70474==         suppressed: 13,218 bytes in 3 blocks
==70474== Reachable blocks (those to which a pointer was found) are not shown.
==70474== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==70474== 
==70474== For lists of detected and suppressed errors, rerun with: -s
==70474== ERROR SUMMARY: 67 errors from 30 contexts (suppressed: 0 from 0) ------------------------------> Very little.


Result for local image:
# cat /tmp/valgrind_qemu.log.file 
==69486== Memcheck, a memory error detector
==69486== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==69486== Using Valgrind-3.17.0 and LibVEX; rerun with -h for copyright info
==69486== Command: /usr/sbin/ip link set tap0 nomaster
==69486== Par==69242== 
==69242== HEAP SUMMARY:
==69242==     in use at exit: 55,831 bytes in 858 blocks
==69242==   total heap usage: 3,976 allocs, 3,118 frees, 140,621 bytes allocated
==69242== 
==69242== LEAK SUMMARY:
==69242==    definitely lost: 0 bytes in 0 blocks
==69242==    indirectly lost: 0 bytes in 0 blocks
==69242==      possibly lost: 0 bytes in 0 blocks
==69242==    still reachable: 55,831 bytes in 858 blocks
==69242==         suppressed: 0 bytes in 0 blocks
==69242== Reachable blocks (those to which a pointer was found) are not shown.
==69242== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==69242== 
==69242== For lists of detected and suppressed errors, rerun with: -s
==69242== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
 from 0)
==69243== 
==69243== HEAP SUMMARY:
==69243==     in use at exit: 20,498,659 bytes in 31,186 blocks
==69243==   total heap usage: 1,122,700 allocs, 1,091,514 frees, 1,141,235,278 bytes allocated
==69243== 
==69243== LEAK SUMMARY:
==69243==    definitely lost: 0 bytes in 0 blocks
==69243==    indirectly lost: 0 bytes in 0 blocks
==69243==      possibly lost: 34,636 bytes in 105 blocks
==69243==    still reachable: 20,464,023 bytes in 31,081 blocks
==69243==         suppressed: 0 bytes in 0 blocks
==69243== Reachable blocks (those to which a pointer was found) are not shown.
==69243== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==69243== 
==69243== For lists of detected and suppressed errors, rerun with: -s
==69243== ERROR SUMMARY: 71 errors from 33 contexts (suppressed: 0 from 0)

Comment 14 qing.wang 2021-09-30 07:35:27 UTC
I tested on local file storage for about 1.5 hours and did not find a big memory leak.
In my understanding, comment #0 does not indicate a large leak either:
274,216,128 bytes is about 270M, which does not match the 33G allocated.
So I think the question is rather why so much memory is in use at runtime,
or why it is not freed in time. Am I right?


My test steps:
1. create image
qemu-img create -f raw data1.raw 80G

2.boot vm
/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	\
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/kvm_autotest_root/images/data1.raw \
	-device scsi-hd,id=image2,drive=drive_image2 \
	\
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+invtsc \
	-vnc :5  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \


3. run fio in guest
./leak.sh

root@localhost /home $ cat leak.sh
mount /dev/sdb /home/x

while true;do fio leak.fio;echo "============";done

root@localhost /home $ cat leak.fio 
[global]
randrepeat=0
filename=/home/x/test.dat
iodepth=8
size=70g
direct=0
ioengine=libaio

[iometer]
stonewall
bs=4M
rw=randrw

[iometer_just_write]
stonewall
bs=4M
rw=write

[iometer_just_read]
stonewall
bs=4M
rw=read


4.monitor the memory usage on host

pid=$(pgrep qemu-kvm); echo "$pid"; file=2.log; while true; do date >> "$file"; ps -e -o 'pid,comm,rsz,vsz' | awk -v pid="$pid" '$1 == pid' >> "$file"; sleep 20; done


5. wait about 1.5 hour and quit qemu 

6.check the log
cat 2.log
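For reference, the growth over a run can be pulled straight out of such a log with a short awk one-liner (a sketch, assuming the format produced by the monitor command above: `date` lines interleaved with `PID COMM RSZ VSZ` samples in KiB):

```shell
# Summarize RSS/VSZ growth between the first and last qemu-kvm samples
# recorded in 2.log; ps reports rsz/vsz in KiB.
awk '/qemu-kvm/ {
    if (first_rsz == "") { first_rsz = $3; first_vsz = $4 }
    last_rsz = $3; last_vsz = $4
}
END {
    printf "RSS growth: %d KiB, VSZ growth: %d KiB\n",
           last_rsz - first_rsz, last_vsz - first_vsz
}' 2.log
```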

test result 1 on
Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-3.el9.x86_64
qemu-kvm-6.0.0-13.el9_b.4.x86_64


Wed Sep 29 07:06:15 PM EDT 2021
   6064 qemu-kvm        4157496 9718332
Wed Sep 29 07:06:35 PM EDT 2021
   6064 qemu-kvm        4170044 10078940
Wed Sep 29 07:06:55 PM EDT 2021
   6064 qemu-kvm        4184380 10080996

..........


Wed Sep 29 09:29:01 PM EDT 2021
   6064 qemu-kvm        3586332 11007308
Wed Sep 29 09:29:21 PM EDT 2021
   6064 qemu-kvm        3586600 11007308



test result 2 on :
Red Hat Enterprise Linux Server release 7.9 (Maipo)
3.10.0-1160.43.1.el7.x86_64
qemu-kvm-1.5.3-175.el7_9.4.x86_64

11718 qemu-kvm        4131248 5608028
Wed Sep 29 22:53:54 EDT 2021
11718 qemu-kvm        4131772 5615196
Wed Sep 29 22:54:14 EDT 2021
11718 qemu-kvm        4131772 5615196
....
Thu Sep 30 01:54:43 EDT 2021
11718 qemu-kvm        4163852 5615196
Thu Sep 30 01:55:03 EDT 2021
11718 qemu-kvm        4163852 5615196


(It looks like there is about a 30M memory leak, but that is not the issue reported here.)

Comment 15 Germano Veit Michel 2021-09-30 22:11:45 UTC
(In reply to qing.wang from comment #14)
> I tested on local file storage for about 1.5 hours and did not find a big
> memory leak.
> In my understanding, comment #0 does not indicate a large leak either:
> 274,216,128 bytes is about 270M, which does not match the 33G allocated.
> So I think the question is rather why so much memory is in use at runtime,
> or why it is not freed in time. Am I right?

Sorry if it wasn't completely clear. The valgrind output in comment #0 is the result of several hours of running under valgrind with fio on the customer side, while the 33G is the result of the VM running for a very long time (months), not under valgrind. The 33G usage is what triggered the investigation.

If we attempt to reproduce for a few hours we will probably not get to 33G leaked; it will need a very long runtime with a lot of IO load.

Is this clearer?

Comment 16 Germano Veit Michel 2021-09-30 22:14:00 UTC
By the way, a 24h test on the customer side with fio in a loop resulted in a 300-400MB leak several times.
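As a rough consistency check (simple arithmetic, not from the bug itself), that daily rate is consistent with the 33G figure from comment #0 accumulating over a months-long runtime:

```shell
# Rough check: at ~350 MiB leaked per 24h (midpoint of 300-400MB),
# reaching ~33 GiB takes roughly three months, matching the customer's
# long VM runtime.
leak_per_day_mib=350
target_gib=33
days=$(( target_gib * 1024 / leak_per_day_mib ))
echo "$days days"   # integer division
```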

Comment 17 Stefano Garzarella 2021-10-01 14:51:55 UTC
Can we try a 24h test with RHEL 8 to compare with the customer leak (300/400 MB)?
Just to know whether we still have this issue, because there have been a lot of changes since that QEMU version, including in the release path of the DMA helpers.

Comment 19 qing.wang 2021-10-09 08:59:55 UTC
I ran the comment #14 test on RHEL 8 for 24h with a local file and did not find a big memory leak.

Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-348.el8.x86_64
qemu-kvm-6.0.0-31.module+el8.5.0+12787+aaa8bdfa.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch

I monitored the memory usage every minute
Please refer to http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-09/080432.log

The log indicates only about 15M of real memory and 30M of virtual memory growth in 24h.


monitor script:
t=$(date "+%d%H%M"); file="$t.log"; echo "$file"; pid=$(pgrep qemu-kvm); echo "$pid"; while true; do date >> "$file"; ps -e -o 'pid,comm,rsz,vsz' | awk -v pid="$pid" '$1 == pid' >> "$file"; sleep 60; done

Vm:

/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	\
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/kvm_autotest_root/images/data1.raw \
	-device scsi-hd,id=image2,drive=drive_image2 \
	\
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
	-vnc :5  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \

Comment 20 qing.wang 2021-10-11 02:54:19 UTC
Ran the same test as comment #19 with the rbd backend for about 40h.
It showed about 45M of real memory and 793M of virtual memory leaked.

Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-348.el8.x86_64
qemu-kvm-6.0.0-31.module+el8.5.0+12787+aaa8bdfa.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch

I monitored the memory usage every minute
Please refer to http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-10/090557.log

/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	\
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=rbd:rbd/disk \
	-device scsi-hd,id=image2,drive=drive_image2 \
	\
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e4,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	-smp 16,cores=8,threads=1,sockets=2  \
	-cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
	-vnc :5  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \

Comment 22 Germano Veit Michel 2021-10-12 00:11:40 UTC
The customer VM in comment #0 was 4GB; the VM RAM allocation is separate from the memory used by IO.

Maybe the difference lies in performance (i.e. on the storage side), so one setup can achieve more IOPS, and hence more leaks, than the other.

Anyway, it seems we can already reproduce the leaks.

Comment 23 qing.wang 2021-10-13 13:38:31 UTC
Ran the same test as comment #20 with the rbd and local-file backends for about 24h.
It showed about 110M of virtual memory leaked for rbd,
and no virtual memory growth during the run for the local-file backend.

Red Hat Enterprise Linux Server release 7.9 (Maipo)
3.10.0-1160.45.1.el7.x86_64
qemu-kvm-1.5.3-175.el7_9.4.x86_64
seabios-bin-1.11.0-2.el7.noarch

I monitored the memory usage every minute.
rbd:
Please refer to http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-12/110220.log

local file:
http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-13/120501.log

Comment 26 qing.wang 2021-10-18 07:39:41 UTC
I did not find a memory leak on RHEL 8 with valgrind.

Test local file backend on:
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-339.el8.x86_64
qemu-kvm-6.0.0-30.module+el8.5.0+12586+476da3e1.x86_64

http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-18/host8-local-24-valgrind-150253.log


Test rbd backend on:
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-348.el8.x86_64
qemu-kvm-6.0.0-31.module+el8.5.0+12787+aaa8bdfa.x86_64
http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-18/host8-rbd-24-valgrind-150310.log

Reproduced the memory leak with a local file on RHEL 7.
Test local file on:
Red Hat Enterprise Linux Server release 7.9 (Maipo)
3.10.0-1160.45.1.el7.x86_64
qemu-kvm-1.5.3-175.el7_9.4.x86_64

http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qbugs/2007036/2021-10-18/host7-local-24-valgrind-150706.log

RBD backend still in testing.

Test step:
1.boot vm 
/usr/libexec/qemu-kvm \
	-name 'avocado-vt-vm1'  \
	-machine pc  \
	-nodefaults  \
	-vga std  \
	-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
	-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel79-64-virtio-scsi.qcow2 \
	-device scsi-hd,id=image1,drive=drive_image1 \
	\
	-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/kvm_autotest_root/images/data1.raw \
	-device scsi-hd,id=image2,drive=drive_image2 \
	\
	-device virtio-net-pci,mac=9a:e0:e1:e2:e3:e3,id=idIRlhSc,vectors=4,netdev=ids4KA3w,bus=pci.0,addr=0x5  \
	-netdev tap,id=ids4KA3w,vhost=on \
	-m 4G  \
	\
	-vnc :5  \
	-rtc base=localtime,clock=host,driftfix=slew  \
	-boot menu=off,strict=off,order=cdn,once=c \
	-enable-kvm \
	-monitor stdio \

2.run fio over 24H in guest then stop the fio
if ! mount | grep /home/x; then
  lsblk /dev/sdb
  mkfs.xfs -f /dev/sdb
  mount /dev/sdb /home/x
fi

time1=$(date +%s)

while true; do
  time2=$(date +%s)
  let t=$time2-$time1
  #16H->57600 24->86400
  if (($t > 86400)); then
    echo "over time"
    break
  else
    let x=$t/60
    echo "do fio at $x"
    fio leak.fio
  fi
done
echo "end"


3.quit the qemu
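The exact valgrind invocation used for these logs is not recorded in this bug; the interleaved `==PID==` sections for helper commands such as `/usr/sbin/ip link` do suggest child tracing was enabled. A hypothetical reconstruction (every option choice here is an assumption, not taken from this bug):

```shell
# Hypothetical valgrind wrapper; the flags are real valgrind options,
# but this particular combination is an assumption, not the recorded
# command from this bug.
build_valgrind_cmd() {
    local log_file="$1"; shift
    # --trace-children=yes would explain the extra ==PID== sections for
    # helper processes (e.g. /usr/sbin/ip) in the captured logs.
    echo "valgrind --leak-check=full --trace-children=yes --log-file=$log_file $*"
}

build_valgrind_cmd /tmp/valgrind_qemu.log /usr/libexec/qemu-kvm -enable-kvm
```

Running the emitted command (e.g. with `eval`) would produce a log in the same shape as the ones quoted above.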

Comment 31 qing.wang 2021-10-19 09:56:04 UTC
RHEL 8.4 has a very small memory leak, but not in the same place as RHEL 7.

This was just a short fio run with a small file size (5G).

The leak sizes: 8.4 slow 3k, 8.4 fast 1.5k, RHEL 7 220k.

The long-running test is still in progress.

detail: 
leak.fio

[global]
randrepeat=0
filename=/home/x/test.dat
iodepth=8
size=5g
direct=0
ioengine=libaio

[iometer]
stonewall
bs=1M
rw=randrw

================================================
8.4 slow:

Red Hat Enterprise Linux release 8.4 (Ootpa)
4.18.0-305.el8.x86_64
qemu-kvm-4.2.0-48.module+el8.4.0+10368+630e803b.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch


==56460== HEAP SUMMARY:
==56460==     in use at exit: 8,513,153 bytes in 16,653 blocks
==56460==   total heap usage: 654,470 allocs, 637,817 frees, 874,029,370 bytes allocated
==56460== 
==56460== 240 (16 direct, 224 indirect) bytes in 1 blocks are definitely lost in loss record 4,386 of 5,035
==56460==    at 0x4C34F0B: malloc (vg_replace_malloc.c:307)
==56460==    by 0x52CA2A5: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==56460==    by 0x52E1EB6: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==56460==    by 0x52E2F89: g_slist_prepend (in /usr/lib64/libglib-2.0.so.0.5600.4)
==56460==    by 0x6DE1B2: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x52B36A7: g_hash_table_foreach (in /usr/lib64/libglib-2.0.so.0.5600.4)
==56460==    by 0x6DF6FB: object_class_foreach (in /usr/libexec/qemu-kvm)
==56460==    by 0x6DF7A5: object_class_get_list (in /usr/libexec/qemu-kvm)
==56460==    by 0x43DB90: main (in /usr/libexec/qemu-kvm)
==56460== 
==56460== 3,040 bytes in 10 blocks are definitely lost in loss record 4,924 of 5,035
==56460==    at 0x4C3721A: calloc (vg_replace_malloc.c:760)
==56460==    by 0x52CA2FD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==56460==    by 0x7CC1A2: qemu_coroutine_new (in /usr/libexec/qemu-kvm)
==56460==    by 0x7CB018: qemu_coroutine_create (in /usr/libexec/qemu-kvm)
==56460==    by 0x742B61: aio_task_pool_start_task (in /usr/libexec/qemu-kvm)
==56460==    by 0x6FF225: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x6FF316: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x731B28: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x733D59: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x73454A: bdrv_co_pwritev_part (in /usr/libexec/qemu-kvm)
==56460==    by 0x721127: ??? (in /usr/libexec/qemu-kvm)
==56460==    by 0x721200: ??? (in /usr/libexec/qemu-kvm)
==56460== 
==56460== LEAK SUMMARY:
==56460==    definitely lost: 3,056 bytes in 11 blocks
==56460==    indirectly lost: 224 bytes in 14 blocks
==56460==      possibly lost: 14,376 bytes in 51 blocks
==56460==    still reachable: 8,495,497 bytes in 16,577 blocks
==56460==                       of which reachable via heuristic:
==56460==                         newarray           : 1,536 bytes in 16 blocks
==56460==         suppressed: 0 bytes in 0 blocks
==56460== Reachable blocks (those to which a pointer was found) are not shown.
==56460== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==56460== 
==56460== For lists of detected and suppressed errors, rerun with: -s
==56460== ERROR SUMMARY: 88826 errors from 1026 contexts (suppressed: 0 from 0)


==================================================
8.4 fast
Red Hat Enterprise Linux release 8.4 (Ootpa)
4.18.0-305.el8.x86_64
qemu-kvm-5.2.0-16.module+el8.4.0+12596+209e4022.10.x86_64
seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch

==65466== 
==65466== HEAP SUMMARY:
==65466==     in use at exit: 9,215,870 bytes in 24,389 blocks
==65466==   total heap usage: 496,977 allocs, 472,588 frees, 364,247,899 bytes allocated
==65466== 
==65466== 24 bytes in 1 blocks are definitely lost in loss record 2,138 of 5,473
==65466==    at 0x4C34F0B: malloc (vg_replace_malloc.c:307)
==65466==    by 0x5D732A5: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D8AEB6: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D8FB56: g_string_sized_new (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D900C9: g_string_new (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x7D7F64: get_relocated_path (cutils.c:944)
==65466==    by 0x63740C: qemu_init (vl.c:3971)
==65466==    by 0x42DBBC: main (main.c:49)
==65466== 
==65466== 24 bytes in 1 blocks are definitely lost in loss record 2,139 of 5,473
==65466==    at 0x4C34F0B: malloc (vg_replace_malloc.c:307)
==65466==    by 0x5D732A5: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D8AEB6: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D8FB56: g_string_sized_new (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x5D900C9: g_string_new (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x7D7F64: get_relocated_path (cutils.c:944)
==65466==    by 0x63772E: find_datadir (vl.c:2873)
==65466==    by 0x63772E: qemu_init (vl.c:3976)
==65466==    by 0x42DBBC: main (main.c:49)
==65466== 
==65466== 304 bytes in 1 blocks are definitely lost in loss record 4,880 of 5,473
==65466==    at 0x4C3721A: calloc (vg_replace_malloc.c:760)
==65466==    by 0x5D732FD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x7D9F82: qemu_coroutine_new (coroutine-ucontext.c:198)
==65466==    by 0x7D1D90: qemu_coroutine_create (qemu-coroutine.c:75)
==65466==    by 0x703D31: aio_task_pool_start_task (aio_task.c:94)
==65466==    by 0x6F6965: qcow2_add_task (qcow2.c:2222)
==65466==    by 0x6F702A: qcow2_co_pwritev_part (qcow2.c:2614)
==65466==    by 0x729F38: bdrv_driver_pwritev (io.c:1123)
==65466==    by 0x72BC67: bdrv_aligned_pwritev (io.c:1945)
==65466==    by 0x72C44A: bdrv_co_pwritev_part (io.c:2113)
==65466==    by 0x7559E7: blk_do_pwritev_part (block-backend.c:1260)
==65466==    by 0x755AC0: blk_aio_write_entry (block-backend.c:1476)
==65466== 
==65466== 1,824 (1,216 direct, 608 indirect) bytes in 4 blocks are definitely lost in loss record 5,298 of 5,473
==65466==    at 0x4C3721A: calloc (vg_replace_malloc.c:760)
==65466==    by 0x5D732FD: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==65466==    by 0x7D9F82: qemu_coroutine_new (coroutine-ucontext.c:198)
==65466==    by 0x7D1D90: qemu_coroutine_create (qemu-coroutine.c:75)
==65466==    by 0x703D31: aio_task_pool_start_task (aio_task.c:94)
==65466==    by 0x6F6965: qcow2_add_task (qcow2.c:2222)
==65466==    by 0x6F8028: qcow2_co_preadv_part (qcow2.c:2320)
==65466==    by 0x727867: bdrv_driver_preadv.constprop.18 (io.c:1054)
==65466==    by 0x72B145: bdrv_aligned_preadv (io.c:1440)
==65466==    by 0x72B6E0: bdrv_co_preadv_part (io.c:1682)
==65466==    by 0x755866: blk_do_preadv (block-backend.c:1211)
==65466==    by 0x755919: blk_aio_read_entry (block-backend.c:1464)
==65466== 
==65466== LEAK SUMMARY:
==65466==    definitely lost: 1,568 bytes in 7 blocks
==65466==    indirectly lost: 608 bytes in 2 blocks
==65466==      possibly lost: 5,838 bytes in 51 blocks
==65466==    still reachable: 9,207,856 bytes in 24,329 blocks
==65466==                       of which reachable via heuristic:
==65466==                         newarray           : 1,536 bytes in 16 blocks
==65466==         suppressed: 0 bytes in 0 blocks
==65466== Reachable blocks (those to which a pointer was found) are not shown.
==65466== To see them, rerun with: --leak-check=full --show-leak-kinds=all

==================================================
rhel7

==11926== 74,368 (65,792 direct, 8,576 indirect) bytes in 514 blocks are definitely lost in loss record 2,512 of 2,528
==11926==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==11926==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==11926==    by 0x67F8D6D: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==11926==    by 0x19754E: qemu_aio_get (block.c:4808)
==11926==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==11926==    by 0x1D808C: dma_bdrv_read (dma-helpers.c:231)
==11926==    by 0x237AE0: scsi_do_read (scsi-disk.c:360)
==11926==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==11926==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==11926==    by 0x26327F: main_loop_wait (main-loop.c:476)
==11926==    by 0x1818BF: main_loop (vl.c:1997)
==11926==    by 0x1818BF: main (vl.c:4367)
==11926== 
==11926== 155,136 (149,888 direct, 5,248 indirect) bytes in 1,171 blocks are definitely lost in loss record 2,519 of 2,528
==11926==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==11926==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==11926==    by 0x67F8D6D: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==11926==    by 0x19754E: qemu_aio_get (block.c:4808)
==11926==    by 0x1D8003: dma_bdrv_io (dma-helpers.c:208)
==11926==    by 0x1D80BC: dma_bdrv_write (dma-helpers.c:239)
==11926==    by 0x2382DC: scsi_write_data (scsi-disk.c:530)
==11926==    by 0x317411: virtio_scsi_handle_cmd (virtio-scsi.c:415)
==11926==    by 0x25ED05: qemu_iohandler_poll (iohandler.c:143)
==11926==    by 0x26327F: main_loop_wait (main-loop.c:476)
==11926==    by 0x1818BF: main_loop (vl.c:1997)
==11926==    by 0x1818BF: main (vl.c:4367)
==11926== 
==11926== LEAK SUMMARY:
==11926==    definitely lost: 228,702 bytes in 1,705 blocks
==11926==    indirectly lost: 14,144 bytes in 124 blocks
==11926==      possibly lost: 21,112 bytes in 159 blocks
==11926==    still reachable: 7,752,334 bytes in 4,401 blocks
=====================================================

Comment 33 qing.wang 2021-10-21 09:32:40 UTC
I ran a long (10h) test as in comment #31 on 8.4 fast and slow.

It shows very little leaked memory (0-2k); sometimes no leak is found at all.
There is also no leak growth over the long run.

This is tracked by Bug 2016311 - "Few memory leak when fio on disk", since the leaks are in different places.

Comment 50 qing.wang 2021-11-15 09:17:53 UTC
Test passed on:
Red Hat Enterprise Linux Server release 7.9 (Maipo)
3.10.0-1160.45.1.el7.x86_64
qemu-kvm-1.5.3-175.el7_9.5.x86_64

Scenario 1:
execute fio for a short time

Scenario 2:
execute fio for a long time (about 4h)


The results were similar to comment #39, and no memory leak growth was found.

==3652== Memcheck, a memory error detector
==3652== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==3652== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==3652== Command: /usr/sbin/ip link set dev tap0 nomaster
==3652== Parent PID==29880== 
==29880== HEAP SUMMARY:
==29880==     in use at exit: 29,875 bytes in 615 blocks
==29880==   total heap usage: 1,803 allocs, 1,188 frees, 55,755 bytes allocated
==29880== 
==29880== LEAK SUMMARY:
==29880==    definitely lost: 0 bytes in 0 blocks
==29880==    indirectly lost: 0 bytes in 0 blocks
==29880==      possibly lost: 0 bytes in 0 blocks
==29880==    still reachable: 29,875 bytes in 615 blocks
==29880==         suppressed: 0 bytes in 0 blocks
==29880== Reachable blocks (those to which a pointer was found) are not shown.
==29880== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==29880== 
==29880== For lists of detected and suppressed errors, rerun with: -s
==29880== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==29881== 
==29881== HEAP SUMMARY:
==29881==     in use at exit: 10,304,723 bytes in 4,617 blocks
==29881==   total heap usage: 14,144,898 allocs, 14,140,281 frees, 290,231,042,345 bytes allocated
==29881== 
==29881== Thread 1:
==29881== 8 bytes in 1 blocks are definitely lost in loss record 238 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x1E857E: qemu_extend_irqs (irq.c:51)
==29881==    by 0x30737B: pc_init1 (pc_piix.c:238)
==29881==    by 0x30737B: pc_init_pci (pc_piix.c:254)
==29881==    by 0x1815FE: main (vl.c:4244)
==29881== 
==29881== 14 bytes in 1 blocks are definitely lost in loss record 475 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x67FA77E: g_strdup (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x325ADA: memory_region_init (memory.c:846)
==29881==    by 0x325B27: memory_region_init_io (memory.c:965)
==29881==    by 0x229D36: i440fx_pcihost_initfn (piix.c:263)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881== 
==29881== 16 bytes in 2 blocks are definitely lost in loss record 725 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x1E857E: qemu_extend_irqs (irq.c:51)
==29881==    by 0x21BB15: bmdma_init (pci.c:557)
==29881==    by 0x21BFA5: pci_piix_init_ports (piix.c:143)
==29881==    by 0x21BFA5: pci_piix_ide_initfn (piix.c:164)
==29881==    by 0x22F3C1: pci_qdev_init (pci.c:1723)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,386 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC559: acpi_get_pm_info (acpi-build.c:138)
==29881==    by 0x2FC559: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDC26: acpi_setup (acpi-build.c:1224)
==29881==    by 0x3053B6: pc_guest_info_machine_done (pc.c:1038)
==29881==    by 0x3E4B96: notifier_list_notify (notify.c:39)
==29881==    by 0x181723: qemu_run_machine_init_done_notifiers (vl.c:2706)
==29881==    by 0x181723: main (vl.c:4334)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,387 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC58C: acpi_get_pm_info (acpi-build.c:144)
==29881==    by 0x2FC58C: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDC26: acpi_setup (acpi-build.c:1224)
==29881==    by 0x3053B6: pc_guest_info_machine_done (pc.c:1038)
==29881==    by 0x3E4B96: notifier_list_notify (notify.c:39)
==29881==    by 0x181723: qemu_run_machine_init_done_notifiers (vl.c:2706)
==29881==    by 0x181723: main (vl.c:4334)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,388 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC5BF: acpi_get_pm_info (acpi-build.c:150)
==29881==    by 0x2FC5BF: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDC26: acpi_setup (acpi-build.c:1224)
==29881==    by 0x3053B6: pc_guest_info_machine_done (pc.c:1038)
==29881==    by 0x3E4B96: notifier_list_notify (notify.c:39)
==29881==    by 0x181723: qemu_run_machine_init_done_notifiers (vl.c:2706)
==29881==    by 0x181723: main (vl.c:4334)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,389 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC559: acpi_get_pm_info (acpi-build.c:138)
==29881==    by 0x2FC559: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDB04: acpi_build_update (acpi-build.c:1166)
==29881==    by 0x22717B: fw_cfg_read (fw_cfg.c:259)
==29881==    by 0x227218: fw_cfg_comb_read (fw_cfg.c:434)
==29881==    by 0x3235CB: memory_region_read_accessor (memory.c:317)
==29881==    by 0x323012: access_with_adjusted_size (memory.c:365)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,390 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC58C: acpi_get_pm_info (acpi-build.c:144)
==29881==    by 0x2FC58C: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDB04: acpi_build_update (acpi-build.c:1166)
==29881==    by 0x22717B: fw_cfg_read (fw_cfg.c:259)
==29881==    by 0x227218: fw_cfg_comb_read (fw_cfg.c:434)
==29881==    by 0x3235CB: memory_region_read_accessor (memory.c:317)
==29881==    by 0x323012: access_with_adjusted_size (memory.c:365)
==29881== 
==29881== 24 bytes in 1 blocks are definitely lost in loss record 1,391 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x3D60AD: qint_from_int (qint.c:33)
==29881==    by 0x3D4F13: qmp_output_type_int (qmp-output-visitor.c:156)
==29881==    by 0x3D3707: visit_type_uint8 (qapi-visit-core.c:133)
==29881==    by 0x2A2E5D: object_property_get_qobject (qom-qobject.c:37)
==29881==    by 0x2FC5BF: acpi_get_pm_info (acpi-build.c:150)
==29881==    by 0x2FC5BF: acpi_build (acpi-build.c:1072)
==29881==    by 0x2FDB04: acpi_build_update (acpi-build.c:1166)
==29881==    by 0x22717B: fw_cfg_read (fw_cfg.c:259)
==29881==    by 0x227218: fw_cfg_comb_read (fw_cfg.c:434)
==29881==    by 0x3235CB: memory_region_read_accessor (memory.c:317)
==29881==    by 0x323012: access_with_adjusted_size (memory.c:365)
==29881== 
==29881== 88 (56 direct, 32 indirect) bytes in 1 blocks are definitely lost in loss record 1,925 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2200D5: isa_register_portio_list (isa-bus.c:111)
==29881==    by 0x211E2B: dma_init2 (i8257.c:531)
==29881==    by 0x21227D: DMA_init (i8257.c:592)
==29881==    by 0x30678F: pc_basic_device_init (pc.c:1325)
==29881==    by 0x30719A: pc_init1 (pc_piix.c:204)
==29881==    by 0x30719A: pc_init_pci (pc_piix.c:254)
==29881==    by 0x1815FE: main (vl.c:4244)
==29881== 
==29881== 88 (56 direct, 32 indirect) bytes in 1 blocks are definitely lost in loss record 1,926 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2200D5: isa_register_portio_list (isa-bus.c:111)
==29881==    by 0x211E2B: dma_init2 (i8257.c:531)
==29881==    by 0x2121FD: DMA_init (i8257.c:594)
==29881==    by 0x30678F: pc_basic_device_init (pc.c:1325)
==29881==    by 0x30719A: pc_init1 (pc_piix.c:204)
==29881==    by 0x30719A: pc_init_pci (pc_piix.c:254)
==29881==    by 0x1815FE: main (vl.c:4244)
==29881== 
==29881== 88 (56 direct, 32 indirect) bytes in 1 blocks are definitely lost in loss record 1,927 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2200D5: isa_register_portio_list (isa-bus.c:111)
==29881==    by 0x1E34DF: isabus_fdc_init1 (fdc.c:2131)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881==    by 0x1EE948: qdev_init_nofail (qdev.c:277)
==29881==    by 0x1E529A: fdctrl_init_isa (fdc.c:2043)
==29881== 
==29881== 104 (56 direct, 48 indirect) bytes in 1 blocks are definitely lost in loss record 2,014 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2FBC92: vga_init (vga.c:2266)
==29881==    by 0x21163F: pci_std_vga_initfn (vga-pci.c:151)
==29881==    by 0x22F3C1: pci_qdev_init (pci.c:1723)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881==    by 0x1EE948: qdev_init_nofail (qdev.c:277)
==29881== 
==29881== 128 bytes in 1 blocks are definitely lost in loss record 2,065 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x1E857E: qemu_extend_irqs (irq.c:51)
==29881==    by 0x3073F1: pc_init1 (pc_piix.c:180)
==29881==    by 0x3073F1: pc_init_pci (pc_piix.c:254)
==29881==    by 0x1815FE: main (vl.c:4244)
==29881== 
==29881== 136 (56 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 2,071 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2FBC83: vga_init (vga.c:2265)
==29881==    by 0x21163F: pci_std_vga_initfn (vga-pci.c:151)
==29881==    by 0x22F3C1: pci_qdev_init (pci.c:1723)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881==    by 0x1EE948: qdev_init_nofail (qdev.c:277)
==29881== 
==29881== 144 (112 direct, 32 indirect) bytes in 2 blocks are definitely lost in loss record 2,078 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2200D5: isa_register_portio_list (isa-bus.c:111)
==29881==    by 0x21BF81: pci_piix_init_ports (piix.c:139)
==29881==    by 0x21BF81: pci_piix_ide_initfn (piix.c:164)
==29881==    by 0x22F3C1: pci_qdev_init (pci.c:1723)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881==    by 0x1EE948: qdev_init_nofail (qdev.c:277)
==29881== 
==29881== 208 (112 direct, 96 indirect) bytes in 2 blocks are definitely lost in loss record 2,181 of 2,548
==29881==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==29881==    by 0x67E175D: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2200D5: isa_register_portio_list (isa-bus.c:111)
==29881==    by 0x21AD98: ide_init_ioport (core.c:2266)
==29881==    by 0x21BF81: pci_piix_init_ports (piix.c:139)
==29881==    by 0x21BF81: pci_piix_ide_initfn (piix.c:164)
==29881==    by 0x22F3C1: pci_qdev_init (pci.c:1723)
==29881==    by 0x1EE3C3: device_realize (qdev.c:178)
==29881==    by 0x1EF32A: device_set_realized (qdev.c:693)
==29881==    by 0x2A0D1D: property_set_bool (object.c:1302)
==29881==    by 0x2A2DF6: object_property_set_qobject (qom-qobject.c:24)
==29881==    by 0x2A1FDF: object_property_set_bool (object.c:853)
==29881==    by 0x1EE799: qdev_init (qdev.c:163)
==29881== 
==29881== 4,096 bytes in 1 blocks are definitely lost in loss record 2,488 of 2,548
==29881==    at 0x4C2C291: realloc (vg_replace_malloc.c:836)
==29881==    by 0x67E1805: g_realloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x67AE522: ??? (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x67AEE53: g_array_set_size (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2FD456: acpi_align_size (acpi-build.c:452)
==29881==    by 0x2FD456: acpi_build (acpi-build.c:1147)
==29881==    by 0x2FDC26: acpi_setup (acpi-build.c:1224)
==29881==    by 0x3053B6: pc_guest_info_machine_done (pc.c:1038)
==29881==    by 0x3E4B96: notifier_list_notify (notify.c:39)
==29881==    by 0x181723: qemu_run_machine_init_done_notifiers (vl.c:2706)
==29881==    by 0x181723: main (vl.c:4334)
==29881== 
==29881== 8,192 bytes in 1 blocks are definitely lost in loss record 2,502 of 2,548
==29881==    at 0x4C2C291: realloc (vg_replace_malloc.c:836)
==29881==    by 0x67E1805: g_realloc (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x67AE522: ??? (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x67AEE53: g_array_set_size (in /usr/lib64/libglib-2.0.so.0.5600.1)
==29881==    by 0x2FC8B1: acpi_data_push (acpi-build.c:435)
==29881==    by 0x2FC8B1: build_dsdt (acpi-build.c:945)
==29881==    by 0x2FC8B1: acpi_build (acpi-build.c:1097)
==29881==    by 0x2FDC26: acpi_setup (acpi-build.c:1224)
==29881==    by 0x3053B6: pc_guest_info_machine_done (pc.c:1038)
==29881==    by 0x3E4B96: notifier_list_notify (notify.c:39)
==29881==    by 0x181723: qemu_run_machine_init_done_notifiers (vl.c:2706)
==29881==    by 0x181723: main (vl.c:4334)
==29881== 
==29881== LEAK SUMMARY:
==29881==    definitely lost: 13,102 bytes in 22 blocks
==29881==    indirectly lost: 352 bytes in 18 blocks
==29881==      possibly lost: 3,848 bytes in 21 blocks
==29881==    still reachable: 10,287,421 bytes in 4,556 blocks
==29881==                       of which reachable via heuristic:
==29881==                         stdstring          : 30 bytes in 1 blocks
==29881==                         newarray           : 1,536 bytes in 16 blocks
==29881==         suppressed: 0 bytes in 0 blocks

Comment 54 errata-xmlrpc 2021-11-23 17:17:46 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (qemu-kvm bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4797
