Bug 892079 - Libvirtd crash when destroying a Windows guest which was executing an s3/s4 operation
Summary: Libvirtd crash when destroying a Windows guest which was executing an s3/s4 operation
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: x86_64
OS: Linux
Target Milestone: rc
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
Keywords: TestBlocker, ZStream
Duplicates: 915653
Depends On: 890648 1080376
Blocks: 896690 915344
Reported: 2013-01-05 05:28 UTC by zhenfeng wang
Modified: 2014-03-25 09:59 UTC (History)
13 users (show)

Previously, destroying a Microsoft Windows guest that was running the guest agent service while an s3/s4 operation was in progress caused libvirtd to crash; the interrupted operation also produced a "domain s4 fail" error message because the domain had been destroyed. With this update, the guest is destroyed successfully and the libvirtd service no longer crashes.
Clone Of: 890648
Last Closed: 2013-11-21 08:36:27 UTC

Attachments (Terms of Use)
libvirtd crash log (64.06 KB, text/plain)
2013-01-18 10:32 UTC, zhpeng

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1581 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2013-11-21 01:11:35 UTC

Comment 2 zhenfeng wang 2013-01-05 06:44:05 UTC
After step 5, I did some further checks:
# service libvirtd status
libvirtd dead but pid file exists

# virsh list
error: Failed to reconnect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused

# ps aux|grep qemu
root     59813  0.0  0.0 103244   856 pts/5    S+   01:29   0:00 grep qemu

Start the libvirtd service, then check the guest's status; the guest has been destroyed:

# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
# service libvirtd status
libvirtd (pid  60193) is running...

# virsh list  --all
 Id    Name                           State
 -     win7-32                        shut off
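The "libvirtd dead but pid file exists" status above is how a SysV init script reports a stale pidfile: the PID recorded in the daemon's pidfile no longer maps to a live process. A minimal Python sketch of that check, assuming Linux (`/proc`); the helper names and temp-file paths here are illustrative, not libvirt code:

```python
import os
import subprocess
import sys
import tempfile

def daemon_status(pidfile):
    """Sketch of a SysV-style status check: 'running' if the recorded
    PID maps to a live process, 'dead but pid file exists' if the
    pidfile outlived the process, 'stopped' if there is no pidfile."""
    if not os.path.exists(pidfile):
        return "stopped"
    with open(pidfile) as f:
        pid = int(f.read().strip())
    # On Linux a live PID has a /proc entry; os.kill(pid, 0)
    # is the portable equivalent.
    if os.path.exists("/proc/%d" % pid):
        return "running (pid %d)" % pid
    return "dead but pid file exists"

def write_pidfile(pid):
    # illustrative stand-in for /var/run/libvirtd.pid
    f = tempfile.NamedTemporaryFile("w", suffix=".pid", delete=False)
    f.write("%d\n" % pid)
    f.close()
    return f.name

# Our own PID is certainly alive ...
live = write_pidfile(os.getpid())
print(daemon_status(live))

# ... while a just-reaped child's PID is certainly dead, which is
# exactly the "libvirtd dead but pid file exists" situation above.
child = subprocess.Popen([sys.executable, "-c", "pass"])
child.wait()
stale = write_pidfile(child.pid)
print(daemon_status(stale))
```

This is why restarting the service (as done below) recovers cleanly: the init script overwrites the stale pidfile with the new daemon's PID.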

Comment 5 Michal Privoznik 2013-01-09 14:35:32 UTC
Patch proposed upstream:


Comment 9 zhpeng 2013-01-18 10:32:38 UTC
Created attachment 682311 [details]
libvirtd crash log

Comment 10 zhpeng 2013-01-18 10:33:02 UTC
crash log attached.

Comment 11 EricLee 2013-01-18 13:18:32 UTC
I can also still reproduce this bug with libvirt-0.10.2-16.el6.x86_64.

Comment 12 Michal Privoznik 2013-01-21 19:07:38 UTC
Okay guys, I've created a scratch build before claiming I fixed this:


Can you please give it a try?

Comment 14 Michal Privoznik 2013-01-22 09:55:04 UTC
Patch proposed upstream:


Comment 16 Michal Privoznik 2013-01-23 14:37:43 UTC
Since this is targeted for 6.5 now, and I've just pushed the patch upstream, I am moving this one to POST:

commit d960d06fc06a448f495c465caf06d3d0c74ea587
Author:     Michal Privoznik <mprivozn@redhat.com>
AuthorDate: Mon Jan 21 11:52:44 2013 +0100
Commit:     Michal Privoznik <mprivozn@redhat.com>
CommitDate: Wed Jan 23 15:35:44 2013 +0100

    qemu_agent: Ignore expected EOFs

    One of my previous patches (f2a4e5f176c408) tried to fix crashing
    libvirtd on domain destroy. However, we need to copy the pattern from
    qemuProcessHandleMonitorEOF() instead of decrementing the reference
    counter. The rationale for this is: if the qemu process is dying
    because the domain is being destroyed, we obtain EOF on both the
    monitor and agent sockets. However, if the exit is expected,
    qemuProcessStop is called, which cleans both agent and monitor
    sockets up. We want qemuAgentClose() to be called iff the EOF is
    not expected, so we don't leak an FD and memory. Moreover, there
    could be a race with qemuProcessHandleMonitorEOF() which could have
    already closed the agent socket, in which case we don't want to do


Comment 18 Jiri Denemark 2013-02-26 13:33:15 UTC
*** Bug 915653 has been marked as a duplicate of this bug. ***

Comment 19 Jakub Libosvar 2013-03-13 09:15:50 UTC
Marking with TestBlocker since it fails our Jenkins Jobs testing RHEV

Comment 20 Dave Allan 2013-03-13 14:34:13 UTC
(In reply to comment #19)
> Marking with TestBlocker since it fails our Jenkins Jobs testing RHEV

Fair enough, thank you for the explanation of what's failing.

Comment 22 zhenfeng wang 2013-08-13 10:15:50 UTC
Verified this bug on libvirt-0.10.2-21.el6.x86_64; the following were my verification steps.

pkg info
1. Prepare the test environment as in step 1 and step 2 in comment 0.
2. Execute s3 in the host; quit the command before it finished:
#  virsh dompmsuspend win7 --target mem

3. Execute s4 in the host; quit the command before it finished:
# virsh dompmsuspend win7 --target disk

4. Destroy the guest in the host:
# virsh destroy win7
Domain win7 destroyed

5. Check the libvirtd status:
# ps aux|grep libvirtd
root      5251  0.0  0.0 103244   836 pts/0    S+   18:13   0:00 grep libvirtd
root     30067  1.7  0.1 1027604 15896 ?       Sl   16:18   1:58 libvirtd --daemon
# service libvirtd status
libvirtd (pid  30067) is running...

Since libvirtd did not crash here, and I can still reproduce this bug with libvirt-0.10.2-13.el6.x86_64, I am marking this bug verified.

Comment 24 errata-xmlrpc 2013-11-21 08:36:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

