
Bug 816006

Summary: memleak in qbus
Product: Red Hat Enterprise Linux 7
Reporter: Shaolong Hu <shu>
Component: qemu-kvm-rhev
Assignee: Markus Armbruster <armbru>
Status: CLOSED CURRENTRELEASE
QA Contact: Yumei Huang <yuhuang>
Severity: low
Docs Contact:
Priority: low
Version: 7.0
CC: areis, chayang, hhuang, jinzhao, juzhang, kraxel, michen, mkenneth, rbalakri, rpacheco, shuang, virt-maint, yuhuang
Target Milestone: rc
Target Release: 7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-29 07:22:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- valgrind log of hotplug/unplug virtio serial bus and port
- valgrind log of hotplug/unplug intel-hda
- base cmd
- base cmd with -nodefaults
- base cmd plug intel-hda
- base cmd plug virtio-serial
- base cmd with -nodefaults plug intel-hda
- base cmd with -nodefaults plug virtio-serial

Description Shaolong Hu 2012-04-25 03:28:27 UTC
Description of problem:
-------------------------
There is a small memory leak in qbus. After discussing with Amit and Armbruster, we consider this a minor issue; reporting it against 6.4 as a tracker.


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
qemu-kvm-rhev-0.12.1.2-2.282.el6.x86_64


How reproducible:
------------------
100%

Steps to Reproduce:
--------------------
1. use valgrind:

hotplug intel-hda then unplug it:
---------------------------------------
(qemu) device_add intel-hda,id=audio1
(qemu) device_del audio1
(qemu) quit
--------------------------------
==26464== 22 bytes in 1 blocks are definitely lost in loss record 92 of 463
==26464==    at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==26464==    by 0x19AB04: qemu_malloc (qemu-malloc.c:57)
==26464==    by 0x1EC834: qbus_create_inplace (qdev.c:741)
==26464==    by 0x2C0E5F: hda_codec_bus_init (intel-hda.c:42)
==26464==    by 0x2C0F09: intel_hda_init (intel-hda.c:1154)
==26464==    by 0x179A45: pci_qdev_init (pci.c:1528)
==26464==    by 0x1ECE57: qdev_init (qdev.c:284)
==26464==    by 0x1ED26E: qdev_device_add (qdev.c:259)
==26464==    by 0x1ED86A: do_device_add (qdev.c:875)
==26464==    by 0x171B7F: monitor_call_handler (monitor.c:4178)
==26464==    by 0x176E9E: handle_user_command (monitor.c:4215)
==26464==    by 0x176FD9: monitor_command_cb (monitor.c:4838)
-----------------------------------------------------------------
==26464== LEAK SUMMARY:
==26464==    definitely lost: 65 bytes in 5 blocks
==26464==    indirectly lost: 768 bytes in 2 blocks
==26464==      possibly lost: 1,056 bytes in 3 blocks
==26464==    still reachable: 4,341,182,020 bytes in 2,189 blocks
==26464==         suppressed: 0 bytes in 0 blocks



hotplug virtio serial and virtio port then unplug them:
--------------------------------------------------------
(qemu) device_add virtio-serial-pci,id=virtio-serial1
(qemu) device_add virtserialport,bus=virtio-serial1.0,nr=1,chardev=channel1,name=com.redhat.rhevm.vdsm.2,id=port1
(qemu) device_del port1
(qemu) device_del virtio-serial1
(qemu) quit
--------------------------------------------------------
==26330== 30 bytes in 1 blocks are definitely lost in loss record 99 of 465
==26330==    at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==26330==    by 0x19AB04: qemu_malloc (qemu-malloc.c:57)
==26330==    by 0x1EC834: qbus_create_inplace (qdev.c:741)
==26330==    by 0x18259E: virtio_serial_init (virtio-serial-bus.c:918)
==26330==    by 0x18238B: virtio_serial_init_pci (virtio-pci.c:868)
==26330==    by 0x179A45: pci_qdev_init (pci.c:1528)
==26330==    by 0x1ECE57: qdev_init (qdev.c:284)
==26330==    by 0x1ED26E: qdev_device_add (qdev.c:259)
==26330==    by 0x1ED86A: do_device_add (qdev.c:875)
==26330==    by 0x171B7F: monitor_call_handler (monitor.c:4178)
==26330==    by 0x176E9E: handle_user_command (monitor.c:4215)
==26330==    by 0x176FD9: monitor_command_cb (monitor.c:4838)
--------------------------------------------------------------------
==26330== LEAK SUMMARY:
==26330==    definitely lost: 130 bytes in 9 blocks
==26330==    indirectly lost: 768 bytes in 2 blocks
==26330==      possibly lost: 704 bytes in 2 blocks
==26330==    still reachable: 4,341,003,517 bytes in 2,202 blocks
==26330==         suppressed: 0 bytes in 0 blocks

Comment 2 Shaolong Hu 2012-04-25 03:32:48 UTC
Created attachment 580055 [details]
valgrind log of hotplug/unplug virtio serial bus and port

Comment 3 Shaolong Hu 2012-04-25 03:34:12 UTC
Created attachment 580056 [details]
valgrind log of hotplug/unplug intel-hda

Comment 5 Markus Armbruster 2012-07-17 11:08:44 UTC
Please provide a full reproducer: exact command line in addition to all the monitor commands.  If you want to go the extra mile, retest with the current version.

Comment 6 Shaolong Hu 2012-07-18 03:15:02 UTC
cmd to reproduce:

valgrind --tool=memcheck --leak-check=yes --log-file=valgrind.log /usr/libexec/qemu-kvm -monitor stdio -nodefconfig -hda RHEL-Server-6.3-64-virtio.qcow2 -smp 2 -m 1G -enable-kvm -chardev socket,id=channel1,path=/tmp/s1,server,nowait

1. boot guest with above cmd

2. in qemu monitor, enter cmd:

(qemu) device_add intel-hda,id=audio1
(qemu) device_del audio1
(qemu) quit

and

(qemu) device_add virtio-serial-pci,id=virtio-serial1
(qemu) device_add virtserialport,bus=virtio-serial1.0,nr=1,chardev=channel1,name=com.redhat.rhevm.vdsm.2,id=port1
(qemu) device_del port1
(qemu) device_del virtio-serial1
(qemu) quit

3. View valgrind.log; it shows the result from comment 0.


Tested with the latest qemu-kvm-0.12.1.2-2.297.el6.x86_64; the problem still exists.


BTW: if "-nodefaults" is added to the command line, the memleak from comment 0 does not occur. I tried to use this option to reduce the valgrind output, but found I cannot reproduce the leak with it.

Comment 7 Markus Armbruster 2012-07-23 15:38:41 UTC
I could use a little more help, just to make sure I'm seeing exactly what you see.

First, run baseline test: same command line, but quit in the monitor right away.

Then repeat all three tests with QEMU option -S.

Attach the resulting six valgrind logs.

Thanks in advance!

Comment 8 Shaolong Hu 2012-07-26 03:20:05 UTC
Created attachment 600426 [details]
base cmd

Comment 9 Shaolong Hu 2012-07-26 03:21:19 UTC
Created attachment 600427 [details]
base cmd with -nodefaults

Comment 10 Shaolong Hu 2012-07-26 03:24:08 UTC
Created attachment 600428 [details]
base cmd plug intel-hda

Comment 11 Shaolong Hu 2012-07-26 03:25:18 UTC
Created attachment 600429 [details]
base cmd plug virtio-serial

Comment 12 Shaolong Hu 2012-07-26 03:30:07 UTC
Created attachment 600430 [details]
base cmd with -nodefaults plug intel-hda

Comment 13 Shaolong Hu 2012-07-26 03:31:28 UTC
Created attachment 600431 [details]
base cmd with -nodefaults plug virtio-serial

Comment 14 Shaolong Hu 2012-07-26 03:34:06 UTC
Here are the six combinations; I hope I understood correctly what you need.

Comment 15 Markus Armbruster 2012-07-27 15:30:31 UTC
Looks like you ran the three test cases "no hotplug, hotplug intel-hda, hotplug virtio-serial" with and without -nodefaults.  That's useful, thanks.

I also asked for results with -S.  I think running the three test cases with -nodefaults -S would suffice.  But before you do that, let me try to pinpoint the bug with the information I already have.

Comment 19 Markus Armbruster 2013-09-12 15:18:34 UTC
The code changed a lot since this bug was reported against RHEL-6.  Could you please retest with current RHEL-7 code?

Comment 20 Shaolong Hu 2013-09-13 06:32:32 UTC
Tested with qemu-kvm-0.12.1.2-2.401.el6.x86_64:

intel-hda:

==25013== 22 bytes in 1 blocks are definitely lost in loss record 121 of 535
==25013==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==25013==    by 0x1C5704: qemu_malloc (qemu-malloc.c:57)
==25013==    by 0x2225B4: qbus_create_inplace (qdev.c:741)
==25013==    by 0x2FD7BF: hda_codec_bus_init (intel-hda.c:42)
==25013==    by 0x2FD869: intel_hda_init (intel-hda.c:1154)
==25013==    by 0x19FD95: pci_qdev_init (pci.c:1528)
==25013==    by 0x222BD7: qdev_init (qdev.c:284)
==25013==    by 0x222FEE: qdev_device_add (qdev.c:259)
==25013==    by 0x2235EA: do_device_add (qdev.c:875)
==25013==    by 0x197AFF: monitor_call_handler (monitor.c:4369)
==25013==    by 0x19CF8E: handle_user_command (monitor.c:4406)
==25013==    by 0x19D0C6: monitor_command_cb (monitor.c:5044)

virtio-serial:

==25202== 30 bytes in 1 blocks are definitely lost in loss record 128 of 538
==25202==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==25202==    by 0x1C5704: qemu_malloc (qemu-malloc.c:57)
==25202==    by 0x2225B4: qbus_create_inplace (qdev.c:741)
==25202==    by 0x1A910E: virtio_serial_init (virtio-serial-bus.c:952)
==25202==    by 0x1A8E1B: virtio_serial_init_pci (virtio-pci.c:866)
==25202==    by 0x19FD95: pci_qdev_init (pci.c:1528)
==25202==    by 0x222BD7: qdev_init (qdev.c:284)
==25202==    by 0x222FEE: qdev_device_add (qdev.c:259)
==25202==    by 0x2235EA: do_device_add (qdev.c:875)
==25202==    by 0x197AFF: monitor_call_handler (monitor.c:4369)
==25202==    by 0x19CF8E: handle_user_command (monitor.c:4406)
==25202==    by 0x19D0C6: monitor_command_cb (monitor.c:5044)

Comment 21 Markus Armbruster 2013-09-27 13:59:23 UTC
Looks like you retested with the latest RHEL-6 bits, not the latest RHEL-7 bits I asked for.  Could you try that, too?

Comment 22 Shaolong Hu 2013-09-29 07:20:12 UTC
(In reply to Markus Armbruster from comment #21)
> Looks like you retested with the latest RHEL-6 bits, not the latest RHEL-7
> bits I asked for.  Could you try that, too?

Oh sorry, here are the results on a RHEL 7 host:

1. The intel-hda problem is gone.

2. The virtio-serial leak has changed a little:

==21806== 17 bytes in 1 blocks are definitely lost in loss record 512 of 1,392
==21806==    at 0x4C28409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21806==    by 0x2E2E1C: malloc_and_trace (vl.c:2800)
==21806==    by 0x528989E: g_malloc (in /usr/lib64/libglib-2.0.so.0.3600.3)
==21806==    by 0x52A0BAE: g_strdup (in /usr/lib64/libglib-2.0.so.0.3600.3)
==21806==    by 0x33B704: virtio_device_set_child_bus_name (virtio.c:1109)
==21806==    by 0x26FD00: virtio_serial_pci_init (virtio-pci.c:1338)
==21806==    by 0x2703D1: virtio_pci_init (virtio-pci.c:996)
==21806==    by 0x237FBA: pci_qdev_init (pci.c:1720)
==21806==    by 0x1F18D0: device_realize (qdev.c:178)
==21806==    by 0x1F2E3A: device_set_realized (qdev.c:699)
==21806==    by 0x2B01CD: property_set_bool (object.c:1301)
==21806==    by 0x2B2AB6: object_property_set_qobject (qom-qobject.c:24)


==21806== 352 (64 direct, 288 indirect) bytes in 1 blocks are definitely lost in loss record 1,175 of 1,392
==21806==    at 0x4C28409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21806==    by 0x2E2E1C: malloc_and_trace (vl.c:2800)
==21806==    by 0x5289385: ??? (in /usr/lib64/libglib-2.0.so.0.3600.3)
==21806==    by 0x52898F6: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.3600.3)
==21806==    by 0x2B13D9: object_property_add (object.c:717)
==21806==    by 0x1F28CC: qdev_property_add_static (qdev.c:651)
==21806==    by 0x1F2B1C: device_initfn (qdev.c:750)
==21806==    by 0x2B0298: object_init_with_type (object.c:293)
==21806==    by 0x2B0298: object_init_with_type (object.c:293)
==21806==    by 0x26F9BD: virtio_serial_pci_instance_init (virtio-pci.c:1375)
==21806==    by 0x2B093B: object_new_with_type (object.c:413)
==21806==    by 0x29D49C: qdev_device_add (qdev-monitor.c:473)


This is on qemu-kvm-1.5.3-3.el7.x86_64.

Comment 28 Markus Armbruster 2017-11-28 15:49:14 UTC
I can't reproduce this issue with latest upstream or qemu-kvm-rhev anymore.  QA, can you confirm this?  I think we can close this BZ if you can.

Comment 29 Ademar Reis 2017-11-28 17:10:15 UTC
(In reply to Markus Armbruster from comment #28)
> I can't reproduce this issue with latest upstream or qemu-kvm-rhev anymore. 
> QA, can you confirm this?  I think we can close this BZ if you can.

The bug was originally opened against qemu-kvm, but we don't care about this small memleak in the downstream base RHEL; we can fix it upstream and wait for a rebase.

So I'm changing the component to qemu-kvm-rhev. If we can't reproduce it anymore, then let's close it as CURRENTRELEASE.

Comment 30 Yumei Huang 2017-11-29 06:01:38 UTC
QE tried with qemu-kvm-rhev-2.10.0-9.el7 and can't reproduce it, with or without '-nodefaults'. Neither "qbus" nor "qdev_device_add" shows up in the valgrind log.
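The verification step described above, scanning the valgrind log for any line mentioning the suspect symbols, can be sketched as a small helper (a hypothetical illustration, not QE's actual tooling):

```python
import re

def symbols_in_valgrind_log(log_text, symbols):
    """Return the subset of `symbols` that appear on any valgrind
    (==PID==-prefixed) line of the log, e.g. in a leak backtrace."""
    found = set()
    for line in log_text.splitlines():
        if not re.match(r"==\d+==", line):
            continue  # skip guest/QEMU output interleaved in the log
        found.update(s for s in symbols if s in line)
    return found
```

The bug counts as not reproduced when `symbols_in_valgrind_log(log, {"qbus", "qdev_device_add"})` comes back empty.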

Comment 31 Markus Armbruster 2017-11-29 07:22:08 UTC
Thank you!  Closing out.