Bug 1324757 - libvirtd crashes when destroying and then starting a guest that has a redirdev device
Summary: libvirtd crashes when destroying and then starting a guest that has a redirdev device
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1326990
Depends On:
Blocks:
 
Reported: 2016-04-07 08:29 UTC by Luyao Huang
Modified: 2016-11-03 18:41 UTC (History)
5 users

Fixed In Version: libvirt-1.3.3-2.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:41:17 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2016:2577
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: libvirt security, bug fix, and enhancement update
Last Updated: 2016-11-03 12:07:06 UTC

Description Luyao Huang 2016-04-07 08:29:40 UTC
Description of problem:

libvirtd crashes when destroying and then starting a guest that has a redirdev device.

Version-Release number of selected component (if applicable):

libvirt-1.3.3-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a guest with the following configuration (excerpts from the domain XML):

# virsh dumpxml rhel7.0-rhel
  <cpu>
    <numa>
      <cell id='0' cpus='0,2' memory='512000' unit='KiB'/>
      <cell id='1' cpus='1,3' memory='512000' unit='KiB'/>
    </numa>
  </cpu>

    <redirdev bus='usb' type='pty'>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>

    <memory model='dimm'>
      <target>
        <size unit='KiB'>141072</size>
        <node>0</node>
      </target>
    </memory>

2. start guest
# virsh start rhel7.0-rhel
Domain rhel7.0-rhel started

3. Edit the guest to set the memory device's target node to 2, which is invalid (the guest has only NUMA nodes 0 and 1), so the next start will fail:

# virsh edit rhel7.0-rhel
Domain rhel7.0-rhel XML configuration edited.

4. destroy and restart guest
# virsh destroy rhel7.0-rhel
Domain rhel7.0-rhel destroyed

# virsh start rhel7.0-rhel
error: Disconnected from qemu:///system due to I/O error
error: Failed to start domain rhel7.0-rhel
error: End of file while reading data: Input/output error


Actual results:

libvirtd crashed

Expected results:

libvirtd should report an error instead of crashing.

Additional info:

valgrind:

==13800== Thread 4:
==13800== Invalid read of size 1
==13800==    at 0x54AFF0E: virPerfFree (virperf.c:87)
==13800==    by 0x1E660A3F: qemuProcessStop (qemu_process.c:5961)
==13800==    by 0x1E6628EC: qemuProcessStart (qemu_process.c:5609)
==13800==    by 0x1E6BF147: qemuDomainObjStart.constprop.47 (qemu_driver.c:7195)
==13800==    by 0x1E6BF885: qemuDomainCreateWithFlags (qemu_driver.c:7249)
==13800==    by 0x55B578B: virDomainCreate (libvirt-domain.c:6787)
==13800==    by 0x14669A: remoteDispatchDomainCreate (remote_dispatch.h:4063)
==13800==    by 0x14669A: remoteDispatchDomainCreateHelper (remote_dispatch.h:4039)
==13800==    by 0x561F2E1: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==13800==    by 0x561F2E1: virNetServerProgramDispatch (virnetserverprogram.c:307)
==13800==    by 0x561A49C: virNetServerProcessMsg (virnetserver.c:137)
==13800==    by 0x561A49C: virNetServerHandleJob (virnetserver.c:158)
==13800==    by 0x55111C4: virThreadPoolWorker (virthreadpool.c:145)
==13800==    by 0x55106E7: virThreadHelper (virthread.c:206)
==13800==    by 0x807EDC4: start_thread (in /usr/lib64/libpthread-2.17.so)
==13800==  Address 0x231ddd28 is 8 bytes inside a block of size 16 free'd
==13800==    at 0x4C2AD17: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==13800==    by 0x54AC2A9: virFree (viralloc.c:582)
==13800==    by 0x54AFF1D: virPerfFree (virperf.c:91)
==13800==    by 0x1E660A3F: qemuProcessStop (qemu_process.c:5961)
==13800==    by 0x1E6B06A7: qemuDomainDestroyFlags (qemu_driver.c:2291)
==13800==    by 0x55A2EAB: virDomainDestroy (libvirt-domain.c:479)
==13800==    by 0x1462AA: remoteDispatchDomainDestroy (remote_dispatch.h:4456)
==13800==    by 0x1462AA: remoteDispatchDomainDestroyHelper (remote_dispatch.h:4432)
==13800==    by 0x561F2E1: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==13800==    by 0x561F2E1: virNetServerProgramDispatch (virnetserverprogram.c:307)
==13800==    by 0x561A49C: virNetServerProcessMsg (virnetserver.c:137)
==13800==    by 0x561A49C: virNetServerHandleJob (virnetserver.c:158)
==13800==    by 0x55111C4: virThreadPoolWorker (virthreadpool.c:145)
==13800==    by 0x55106E7: virThreadHelper (virthread.c:206)
==13800==    by 0x807EDC4: start_thread (in /usr/lib64/libpthread-2.17.so)
==13800== 
==13800== Invalid free() / delete / delete[] / realloc()
==13800==    at 0x4C2AD17: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==13800==    by 0x54AC2A9: virFree (viralloc.c:582)
==13800==    by 0x54AFF1D: virPerfFree (virperf.c:91)
==13800==    by 0x1E660A3F: qemuProcessStop (qemu_process.c:5961)
==13800==    by 0x1E6628EC: qemuProcessStart (qemu_process.c:5609)
==13800==    by 0x1E6BF147: qemuDomainObjStart.constprop.47 (qemu_driver.c:7195)
==13800==    by 0x1E6BF885: qemuDomainCreateWithFlags (qemu_driver.c:7249)
==13800==    by 0x55B578B: virDomainCreate (libvirt-domain.c:6787)
==13800==    by 0x14669A: remoteDispatchDomainCreate (remote_dispatch.h:4063)
==13800==    by 0x14669A: remoteDispatchDomainCreateHelper (remote_dispatch.h:4039)
==13800==    by 0x561F2E1: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==13800==    by 0x561F2E1: virNetServerProgramDispatch (virnetserverprogram.c:307)
==13800==    by 0x561A49C: virNetServerProcessMsg (virnetserver.c:137)
==13800==    by 0x561A49C: virNetServerHandleJob (virnetserver.c:158)
==13800==    by 0x55111C4: virThreadPoolWorker (virthreadpool.c:145)
==13800==  Address 0x231ddd20 is 0 bytes inside a block of size 16 free'd
==13800==    at 0x4C2AD17: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==13800==    by 0x54AC2A9: virFree (viralloc.c:582)
==13800==    by 0x54AFF1D: virPerfFree (virperf.c:91)
==13800==    by 0x1E660A3F: qemuProcessStop (qemu_process.c:5961)
==13800==    by 0x1E6B06A7: qemuDomainDestroyFlags (qemu_driver.c:2291)
==13800==    by 0x55A2EAB: virDomainDestroy (libvirt-domain.c:479)
==13800==    by 0x1462AA: remoteDispatchDomainDestroy (remote_dispatch.h:4456)
==13800==    by 0x1462AA: remoteDispatchDomainDestroyHelper (remote_dispatch.h:4432)
==13800==    by 0x561F2E1: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
==13800==    by 0x561F2E1: virNetServerProgramDispatch (virnetserverprogram.c:307)
==13800==    by 0x561A49C: virNetServerProcessMsg (virnetserver.c:137)
==13800==    by 0x561A49C: virNetServerHandleJob (virnetserver.c:158)
==13800==    by 0x55111C4: virThreadPoolWorker (virthreadpool.c:145)
==13800==    by 0x55106E7: virThreadHelper (virthread.c:206)
==13800==    by 0x807EDC4: start_thread (in /usr/lib64/libpthread-2.17.so)
==13800== 

gdb:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f906404a700 (LWP 14088)]
0x00007f9070fffaad in malloc_consolidate () from /lib64/libc.so.6
(gdb) bt
#0  0x00007f9070fffaad in malloc_consolidate () from /lib64/libc.so.6
#1  0x00007f9071000f06 in _int_free () from /lib64/libc.so.6
#2  0x00007f9073cea2aa in virFree (ptrptr=ptrptr@entry=0x7f9064049608) at util/viralloc.c:582
#3  0x00007f9073d6dd9f in virDomainRedirdevDefFree (def=0x7f903801b340) at conf/domain_conf.c:2239
#4  0x00007f9073d7f2e4 in virDomainDefFree (def=0x7f9038014af0) at conf/domain_conf.c:2567
#5  0x00007f905b055bc0 in qemuProcessStop (driver=driver@entry=0x7f9050128a60, vm=vm@entry=0x7f90501ea450, reason=reason@entry=VIR_DOMAIN_SHUTOFF_FAILED, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_START, 
    flags=flags@entry=2) at qemu/qemu_process.c:6030
#6  0x00007f905b0578ed in qemuProcessStart (conn=conn@entry=0x7f9044000ac0, driver=driver@entry=0x7f9050128a60, vm=vm@entry=0x7f90501ea450, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_START, 
    migrateFrom=migrateFrom@entry=0x0, migrateFd=migrateFd@entry=-1, migratePath=migratePath@entry=0x0, snapshot=snapshot@entry=0x0, vmop=vmop@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=flags@entry=1)
    at qemu/qemu_process.c:5609
#7  0x00007f905b0b4148 in qemuDomainObjStart (conn=0x7f9044000ac0, driver=driver@entry=0x7f9050128a60, vm=0x7f90501ea450, flags=flags@entry=0, asyncJob=QEMU_ASYNC_JOB_START) at qemu/qemu_driver.c:7195
#8  0x00007f905b0b4886 in qemuDomainCreateWithFlags (dom=0x7f904c002dd0, flags=0) at qemu/qemu_driver.c:7249
#9  0x00007f9073df378c in virDomainCreate (domain=domain@entry=0x7f904c002dd0) at libvirt-domain.c:6787
#10 0x00007f9074a6269b in remoteDispatchDomainCreate (server=0x7f9076498ab0, msg=0x7f90764b0300, args=<optimized out>, rerr=0x7f9064049c30, client=0x7f90764b0370) at remote_dispatch.h:4063
#11 remoteDispatchDomainCreateHelper (server=0x7f9076498ab0, client=0x7f90764b0370, msg=0x7f90764b0300, rerr=0x7f9064049c30, args=<optimized out>, ret=0x7f904c000d50) at remote_dispatch.h:4039
#12 0x00007f9073e5d2e2 in virNetServerProgramDispatchCall (msg=0x7f90764b0300, client=0x7f90764b0370, server=0x7f9076498ab0, prog=0x7f90764ac2a0) at rpc/virnetserverprogram.c:437
#13 virNetServerProgramDispatch (prog=0x7f90764ac2a0, server=server@entry=0x7f9076498ab0, client=0x7f90764b0370, msg=0x7f90764b0300) at rpc/virnetserverprogram.c:307
#14 0x00007f9073e5849d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f9076498ab0) at rpc/virnetserver.c:137
#15 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f9076498ab0) at rpc/virnetserver.c:158
#16 0x00007f9073d4f1c5 in virThreadPoolWorker (opaque=opaque@entry=0x7f907648d440) at util/virthreadpool.c:145
#17 0x00007f9073d4e6e8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#18 0x00007f9071353dc5 in start_thread () from /lib64/libpthread.so.0
#19 0x00007f907107a28d in clone () from /lib64/libc.so.6

Comment 1 Peter Krempa 2016-04-07 11:28:04 UTC
Fixed upstream:

commit 03e8d5fb54c7c897225ed9ea56d83b894930f144
Author: Peter Krempa <pkrempa>
Date:   Thu Apr 7 12:50:15 2016 +0200

    qemu: perf: Fix crash/memory corruption on failed VM start
    
    The new perf code didn't bother to clear a pointer in 'priv' causing a
    double free or other memory corruption goodness if a VM failed to start.
    
    Clear the pointer after freeing the memory.

Comment 2 Jiri Denemark 2016-04-14 06:49:08 UTC
*** Bug 1326990 has been marked as a duplicate of this bug. ***

Comment 4 Luyao Huang 2016-08-10 07:33:05 UTC
Verified this bug with libvirt-2.0.0-4.el7.x86_64:

0. Start libvirtd under valgrind.

1. Prepare a guest that has a memory device:
# virsh dumpxml r7

  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='6'>10</vcpu>

  <cpu>
    <numa>
      <cell id='0' cpus='0-4' memory='524288' unit='KiB'/>
      <cell id='1' cpus='5-9' memory='524288' unit='KiB'/>
    </numa>
  </cpu>

    <redirdev bus='usb' type='pty'>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>

   <memory model='dimm'>
      <target>
        <size unit='KiB'>1048576</size>
        <node>1</node>
      </target>
    </memory>

2. start guest:

# virsh start r7
Domain r7 started

3. Edit the guest so that the next start will fail (set the memory device's target node to a nonexistent NUMA node):

# virsh edit r7

    <memory model='dimm'>
      <target>
        <size unit='KiB'>1048576</size>
        <node>3</node>
      </target>
    </memory>

Domain r7 XML configuration edited.

4. destroy guest and start guest again:

# virsh destroy r7
Domain r7 destroyed

# virsh start r7
error: Failed to start domain r7
error: unsupported configuration: can't add memory backend for guest node '3' as the guest has only '2' NUMA nodes configured

5. Check the valgrind output: it reports no invalid memory accesses.

Comment 6 errata-xmlrpc 2016-11-03 18:41:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

