Bug 1350688 - libvirtd crashes after qemu-attach in qemuDomainPerfRestart()
Summary: libvirtd crashes after qemu-attach in qemuDomainPerfRestart()
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-28 06:51 UTC by Jingjing Shao
Modified: 2016-11-03 18:47 UTC (History)
7 users

Fixed In Version: libvirt-2.0.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:47:38 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2577 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2016-11-03 12:07:06 UTC

Description Jingjing Shao 2016-06-28 06:51:33 UTC
Description of problem:
The libvirtd service crashes after running qemu-attach with a domain PID.

Version-Release number of selected component (if applicable):
libvirt-1.3.5-1.el7.x86_64
qemu-kvm-rhev-2.6.0-9.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1.  make sure the libvirtd service is running
[root@hp-dl385g8-02 images]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-06-28 02:00:29 EDT; 16s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3335 (libvirtd)


2.# ll /var/lib/libvirt/images/test.img
-rw-r--r--. 1 root root 1647771648 Jun 28 01:51 /var/lib/libvirt/images/test.img

3.
[root@hp-dl385g8-02 tmp]# /usr/libexec/qemu-kvm -m 1500 -drive file=/var/lib/libvirt/images/test.img,index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f &
[1] 2960
[root@hp-dl385g8-02 tmp]# VNC server running on '::1;5900'

4.[root@hp-dl385g8-02 images]# ps -ef | grep qemu
root      2960 11242 99 01:44 pts/0    00:00:12 /usr/libexec/qemu-kvm -m 1500 -drive file=/var/lib/libvirt/images/test.img,index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f

5.[root@hp-dl385g8-02 images]# virsh qemu-attach  2960
error: Disconnected from qemu:///system due to I/O error
error: Failed to attach to pid 2960
error: End of file while reading data: Input/output error

6.[root@hp-dl385g8-02 images]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-06-28 02:01:03 EDT; 17s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3433 (libvirtd)  


Actual results:
libvirtd crashes, as described in the summary.

Expected results:
The domain should be attached successfully, and the libvirtd service should not crash.

Additional info:
Thread 17 (Thread 0x7f56e06d6700 (LWP 3684)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 16 (Thread 0x7f56dfed5700 (LWP 3685)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 15 (Thread 0x7f56df6d4700 (LWP 3686)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 14 (Thread 0x7f56deed3700 (LWP 3687)):
#0  0x00007f56d40c9042 in qemuDomainPerfRestart () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#1  0x00007f56d40ce132 in qemuProcessAttach () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#2  0x00007f56d41119f4 in qemuDomainQemuAttach () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#3  0x00007f56f00501ac in virDomainQemuAttach () from /lib64/libvirt-qemu.so.0
#4  0x00007f56f08db7b9 in qemuDispatchDomainAttachHelper ()
#5  0x00007f56efccdef2 in virNetServerProgramDispatch () from /lib64/libvirt.so.0
#6  0x00007f56f08eae0d in virNetServerHandleJob ()
#7  0x00007f56efbbc581 in virThreadPoolWorker () from /lib64/libvirt.so.0
#8  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#9  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 13 (Thread 0x7f56de6d2700 (LWP 3688)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 12 (Thread 0x7f56dded1700 (LWP 3689)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc5cb in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7f56dd6d0700 (LWP 3690)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc5cb in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f56dcecf700 (LWP 3691)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc5cb in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f56d7fff700 (LWP 3692)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc5cb in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f56d77fe700 (LWP 3693)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc5cb in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f56cd58e700 (LWP 3694)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f56ccd8d700 (LWP 3695)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f56ada7e700 (LWP 3696)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f56ad27d700 (LWP 3697)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f56aca7c700 (LWP 3698)):
#0  0x00007f56ed1e26d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f56efbbbb76 in virCondWait () from /lib64/libvirt.so.0
#2  0x00007f56efbbc633 in virThreadPoolWorker () from /lib64/libvirt.so.0
#3  0x00007f56efbbb908 in virThreadHelper () from /lib64/libvirt.so.0
#4  0x00007f56ed1dedc5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f56ecf051cd in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f56f082a8c0 (LWP 3683)):
#0  0x00007f56ecefab7d in poll () from /lib64/libc.so.6
#1  0x00007f56efb75bb6 in virEventPollRunOnce () from /lib64/libvirt.so.0
#2  0x00007f56efb74692 in virEventRunDefaultImpl () from /lib64/libvirt.so.0
#3  0x00007f56efcc7fdd in virNetDaemonRun () from /lib64/libvirt.so.0
#4  0x00007f56f08b1d02 in main ()

Comment 1 Peter Krempa 2016-06-28 07:05:06 UTC
Perf event state is not initialized in qemuAttach

Comment 3 Peter Krempa 2016-06-30 13:11:29 UTC
Fixed upstream:

commit d7c40d50d721f5e34522efc57c4c4537721602c2
Author: Peter Krempa <pkrempa>
Date:   Tue Jun 28 14:37:29 2016 +0200

    conf: def: Avoid unnecessary allocation of 'perf' events definition
    
    Some code paths already assume that it is allocated since it was always
    allocated by virDomainPerfDefParseXML. Make it member of virDomainDef
    directly so that we don't have to allocate it all the time.
    
    This fixes crash when attempting to connect to an existing process via
    virDomainQemuAttach since we would not allocate it in that code path.

Comment 5 yalzhang@redhat.com 2016-07-04 07:16:49 UTC
Tested on libvirt-2.0.0-1.el7.x86_64; the result is as expected, so I am moving the bug to VERIFIED.

# /usr/libexec/qemu-kvm -m 1024 -drive file=/var/lib/libvirt/images/rhel7.2.qcow2,index=0 -monitor unix:/tmp/ff,server,nowait -name r7 --uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f &
[3] 11634
# warning: host doesn't support requested feature: CPUID.80000001H:ECX.abm [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:ECX.sse4a [bit 6]
VNC server running on '::1;5900'

# ps -ef | grep r7 | grep -v grep
root     11634  6721 99 14:59 pts/1    00:00:17 /usr/libexec/qemu-kvm -m 1024 -drive file=/var/lib/libvirt/images/rhel7.2.qcow2,index=0 -monitor unix:/tmp/ff,server,nowait -name r7 --uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f

# virsh qemu-attach 11634
Domain r7 attached to pid 11634

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     r7                             running

# ps -ef | grep r7 | grep -v grep
root     11634  6721 83 14:59 pts/1    00:00:49 /usr/libexec/qemu-kvm -m 1024 -drive file=/var/lib/libvirt/images/rhel7.2.qcow2,index=0 -monitor unix:/tmp/ff,server,nowait -name r7 --uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f

Comment 6 yalzhang@redhat.com 2016-09-22 07:57:10 UTC
Hi peter,

Even though the 'qemu-attach' command is discouraged in the man page, one change confuses me. Could you please help confirm?
Test on libvirt-2.0.0-10.el7.x86_64

Steps to Reproduce:
1. # qemu-img create /var/lib/libvirt/images/foo.img 1G

2. # /usr/libexec/qemu-kvm -cdrom /var/lib/libvirt/images/foo.img -monitor unix:/tmp/demo,server,nowait -name foo -uuid cece4f9f-dff0-575d-0e8e-01fe380f12ea &
[1] 7044
  # VNC server running on '::1;5900'

3. qemu-attach fails:
# virsh qemu-attach 7044
error: Failed to attach to pid 7044
error: Operation not supported: JSON monitor is required
# virsh list --all
 Id    Name                           State
----------------------------------------------------

===> not connected

4. Run the qemu-attach command a second time; it hangs. In another terminal, you will find the VM is already listed, in "shut off" state:
# virsh qemu-attach 7044  

===> it will hang
# virsh list
 Id    Name                           State
----------------------------------------------------
 4     foo                            shut off
# ps -aux | grep qemu-kvm
root      7044 18.4  0.5 591372 39536 pts/1    Sl   12:34   0:12 /usr/libexec/qemu-kvm -cdrom /var/lib/libvirt/images/foo.img -monitor unix:/tmp/demo,server,nowait -name foo -uuid cece4f9f-dff0-575d-0e8e-01fe380f12ea

5. Run qemu-attach a third time; it reports that the domain is already active:
# virsh qemu-attach 7044
error: Failed to attach to pid 7044
error: Requested operation is not valid: domain 'foo' is already active

Actual results:
qemu-attach fails with "JSON monitor is required".

Expected results:
The qemu-attach command should succeed.

Additional info:
After downgrading to libvirt-2.0.0-6.el7.x86_64 on the same host, it works:
# virsh qemu-attach 6636
Domain foo attached to pid 6636

I don't know what the "JSON monitor" refers to. Should I install some additional package to provide it?
Thank you in advance.

Comment 7 Peter Krempa 2016-09-22 10:42:06 UTC
Please file a new bug with the findings.

I'm not sure what caused us to require the JSON monitor in this case, but the VM should not remain defined after the second attach attempt.

Comment 8 yalzhang@redhat.com 2016-09-22 10:56:56 UTC
Hi peter,

Thank you. I filed bug 1378401.

Comment 10 errata-xmlrpc 2016-11-03 18:47:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

