Bug 1162097 - crash after attempted spice channel hotplug
Summary: crash after attempted spice channel hotplug
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-10 09:25 UTC by CongDong
Modified: 2015-03-05 07:47 UTC
CC List: 8 users

Fixed In Version: libvirt-1.2.8-7.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:47:17 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2015:0323
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Low: libvirt security, bug fix, and enhancement update
Last Updated: 2015-03-05 12:10:54 UTC

Description CongDong 2014-11-10 09:25:42 UTC
Description of problem:
After attempting to attach a spicevmc channel to a guest and then destroying the guest,
libvirtd core dumps

Version-Release number of selected component (if applicable):
libvirt-1.2.8-6.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. prepare a guest and spicevmc.xml
# cat spicevmc.xml
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
2. # virsh attach-device $vm spicevmc.xml
error: Failed to attach device from spicevmc.xml
error: invalid argument: device not present in domain configuration
3. # virsh destroy 7
error: Failed to destroy domain 7
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor


Actual results:
As in the steps above, libvirtd core dumps

Expected results:
libvirt should not core dump

Additional info:
(gdb) run
Starting program: /usr/sbin/libvirtd 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffe8b9e700 (LWP 6986)]
[New Thread 0x7fffe839d700 (LWP 6987)]
[New Thread 0x7fffe7b9c700 (LWP 6988)]
[New Thread 0x7fffe739b700 (LWP 6989)]
[New Thread 0x7fffe6b9a700 (LWP 6990)]
[New Thread 0x7fffe6399700 (LWP 6991)]
[New Thread 0x7fffe5b98700 (LWP 6992)]
[New Thread 0x7fffe5397700 (LWP 6993)]
[New Thread 0x7fffe4b96700 (LWP 6994)]
[New Thread 0x7fffe4395700 (LWP 6995)]
[New Thread 0x7fffdf47a700 (LWP 6996)]
Detaching after fork from child process 6997.
Detaching after fork from child process 6998.
Detaching after fork from child process 7064.
Detaching after fork from child process 7065.
Detaching after fork from child process 7066.
Detaching after fork from child process 7067.
Detaching after fork from child process 7068.
[Thread 0x7fffdf47a700 (LWP 6996) exited]
Detaching after fork from child process 7084.
Detaching after fork from child process 7089.
2014-11-10 08:38:36.736+0000: 6990: info : libvirt version: 1.2.8, package: 6.el7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2014-11-04-08:28:35, x86-020.build.eng.bos.redhat.com)
2014-11-10 08:38:36.736+0000: 6990: error : qemuMonitorJSONAttachCharDevCommand:6094 : operation failed: Unsupported char device type '10'
2014-11-10 08:38:36.736+0000: 6990: error : qemuDomainChrRemove:1450 : invalid argument: device not present in domain configuration

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe8b9e700 (LWP 6986)]
0x00007ffff464f52c in free () from /lib64/libc.so.6
(gdb) t a a bt

Thread 11 (Thread 0x7fffe4395700 (LWP 6995)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b77b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7fffe4b96700 (LWP 6994)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b77b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7fffe5397700 (LWP 6993)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b77b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7fffe5b98700 (LWP 6992)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b77b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7fffe6399700 (LWP 6991)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b77b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fffe6b9a700 (LWP 6990)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b75b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fffe739b700 (LWP 6989)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b75b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7fffe7b9c700 (LWP 6988)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b75b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7fffe839d700 (LWP 6987)):
#0  0x00007ffff4da2705 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff750b2a6 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007ffff750b75b in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7fffe8b9e700 (LWP 6986)):
#0  0x00007ffff464f52c in free () from /lib64/libc.so.6
#1  0x00007ffff74b386a in virFree () from /lib64/libvirt.so.0
#2  0x00007ffff7525f3e in virDomainChrDefFree () from /lib64/libvirt.so.0
#3  0x00007ffff75344b4 in virDomainDefFree () from /lib64/libvirt.so.0
#4  0x00007fffe0b5e412 in qemuProcessStop ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#5  0x00007fffe0ba6550 in qemuDomainDestroyFlags ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#6  0x00007ffff7590ddc in virDomainDestroy () from /lib64/libvirt.so.0
#7  0x000055555558e3fc in remoteDispatchDomainDestroyHelper ()
#8  0x00007ffff7606ff2 in virNetServerProgramDispatch ()
   from /lib64/libvirt.so.0
#9  0x000055555559c1fd in virNetServerHandleJob ()
#10 0x00007ffff750b6c5 in virThreadPoolWorker () from /lib64/libvirt.so.0
#11 0x00007ffff750b05e in virThreadHelper () from /lib64/libvirt.so.0
#12 0x00007ffff4d9edf3 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff46c505d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ffff7fc7880 (LWP 6982)):
#0  0x00007ffff4da4f7d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007ffff4da0d41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007ffff4da0c47 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fffe0b598fc in qemuProcessHandleEvent ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#4  0x00007fffe0b7190e in qemuMonitorEmitEvent ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#5  0x00007fffe0b82f21 in qemuMonitorJSONIOProcess ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#6  0x00007fffe0b7038d in qemuMonitorIO ()
   from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#7  0x00007ffff74d09aa in virEventPollRunOnce () from /lib64/libvirt.so.0
#8  0x00007ffff74cf092 in virEventRunDefaultImpl () from /lib64/libvirt.so.0
#9  0x000055555559d6ad in virNetServerRun ()
#10 0x000055555556a548 in main ()

Comment 2 Ján Tomko 2014-11-10 15:58:46 UTC
Upstream patch:
https://www.redhat.com/archives/libvir-list/2014-November/msg00285.html

Comment 3 Ján Tomko 2014-11-11 13:56:34 UTC
Fixed upstream by:
commit b987684ff63a20ab1301c48ca4842930be044f6d
Author:     Ján Tomko <jtomko>
CommitDate: 2014-11-11 14:12:15 +0100

    Fix virDomainChrEquals for spicevmc
    
    virDomainChrSourceDefIsEqual should return 'true' for
    identical SPICEVMC chardevs, and those that have no source
    specification.
    
    After this change, a failed hotplug no longer leaves a stale
    pointer in the domain definition.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1162097

git describe: v1.2.10-76-gb987684
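
For illustration only, a minimal self-contained C sketch of the comparison problem the commit message describes. The type and function names below (chr_def, chr_source_is_equal_*) are simplified stand-ins, not the real libvirt definitions (virDomainChrSourceDef / virDomainChrSourceDefIsEqual), and the "broken" variant only assumes that the pre-fix code fell through to "not equal" for chardev types that carry no source data:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum chr_type { CHR_TYPE_PTY, CHR_TYPE_UNIX, CHR_TYPE_SPICEVMC };

struct chr_def {
    enum chr_type type;
    const char *path;           /* only meaningful for PTY/UNIX here */
};

/* Assumed pre-fix behaviour: types without a source specification,
 * such as SPICEVMC, fall through and never compare as equal, so a
 * remove-by-value lookup after a failed hotplug cannot find the entry
 * it just added. */
static bool chr_source_is_equal_broken(const struct chr_def *a,
                                       const struct chr_def *b)
{
    if (a->type != b->type)
        return false;
    switch (a->type) {
    case CHR_TYPE_PTY:
    case CHR_TYPE_UNIX:
        return a->path && b->path && strcmp(a->path, b->path) == 0;
    default:
        return false;           /* SPICEVMC never matches itself */
    }
}

/* In the spirit of commit b987684: if the type has nothing further to
 * compare, identical types are equal. */
static bool chr_source_is_equal_fixed(const struct chr_def *a,
                                      const struct chr_def *b)
{
    if (a->type != b->type)
        return false;
    switch (a->type) {
    case CHR_TYPE_PTY:
    case CHR_TYPE_UNIX:
        return a->path && b->path && strcmp(a->path, b->path) == 0;
    default:
        return true;            /* no source specification to compare */
    }
}

int main(void)
{
    struct chr_def spice = { CHR_TYPE_SPICEVMC, NULL };

    /* With the broken comparison the spicevmc def added during the
     * failed attach is never matched again, so a stale pointer stays
     * in the domain definition and is freed a second time on destroy. */
    printf("broken: equal = %d\n", chr_source_is_equal_broken(&spice, &spice));
    printf("fixed:  equal = %d\n", chr_source_is_equal_fixed(&spice, &spice));
    return 0;
}

Built with any C compiler, the broken variant reports two identical SPICEVMC defs as unequal (0) while the fixed variant reports them as equal (1), which is the behavioral change the commit message describes.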

Comment 6 lcheng 2014-12-01 07:14:58 UTC
Reproduced this bug with libvirt-1.2.8-6.el7.x86_64.

# virsh start r7
Domain r7 started

# virsh dumpxml r7
...
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
...

# cat spicevmc.xml 
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>


# virsh attach-device r7 spicevmc.xml 
error: Failed to attach device from spicevmc.xml
error: invalid argument: device not present in domain configuration

# virsh destroy r7
error: Failed to destroy domain r7
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor


=============================================

Verified as follows; libvirtd does not core dump.


Version:
libvirt-1.2.8-9.el7.x86_64
qemu-kvm-rhev-2.1.2-13.el7.x86_64
qemu-kvm-1.5.3-82.el7.x86_64


Steps:
# cat spicevmc.xml 
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>

# virsh start r7
Domain r7 started

# virsh attach-device r7 spicevmc.xml 
error: Failed to attach device from spicevmc.xml
error: Requested operation is not valid: chardev already exists

# virsh destroy r7
Domain r7 destroyed


Additional info:
For NULL and VC type chardevs, the test results are the same as for SPICEVMC chardevs.

Comment 8 errata-xmlrpc 2015-03-05 07:47:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

