Bug 750773 - qemu-kvm hang while booting a win7 32 bit VM with the qxl and virtio-serial drivers installed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: qemu
Version: 16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Fedora Virtualization Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-11-02 11:04 UTC by Christophe Fergeau
Modified: 2013-02-11 22:27 UTC (History)
26 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-11 22:27:58 UTC
Type: ---
Embargoed:


Attachments (Terms of Use)
NMI Crash Dump (17.49 MB, application/x-gzip)
2012-02-22 13:30 UTC, Emanuel Rietveld

Description Christophe Fergeau 2011-11-02 11:04:01 UTC
Description of problem:




Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a win7 VM
2. Install the qxl drivers (I've also installed the spice agent and the needed virtio drivers)
3. Restart the VM
  
Actual results:
The VM hangs during boot, showing the Windows logo on a black background (I think it hangs when it's supposed to switch to the login screen)

Expected results:
The VM starts up

Additional info:
The command line I'm using is:
qemu-kvm -L /usr/share/seabios/  -bios /usr/share/seabios/bios.bin -spice port=9001,disable-ticketing -enable-kvm -m 1G -usbdevice tablet  -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,nr=1,bus=virtio-serial0.0,chardev=vdagent,name=com.redhat.spice.0 -drive file=/home/teuf/vm-pool/731628/win7.qcow2  -vga qxl

Starting in VGA mode works. I built qemu-0.15.1 (as opposed to qemu-kvm) and it works fine too. Reverting http://git.kiszka.org/qemu-kvm.git/?p=qemu-kvm.git;a=commitdiff;h=a16c53b101a9897b0b2be96a1bb3bde7c04380f2 fixes the hang for me (I ran a bisect on this git tree), but I'm not really sure why this helps...

I can investigate this some more if you can provide some guidance :)

Comment 1 Christophe Fergeau 2011-11-11 21:03:32 UTC
Actually, I ran some more tests, and it's also related to --enable-io-thread. It became enabled by default in qemu (from qemu.org) master, and I get the hang there. If I pick the 0.15 branch and compile with --enable-io-thread, I get the hang too, and it disappears as soon as I disable the I/O thread.

Comment 2 Christophe Fergeau 2011-11-21 11:20:34 UTC
A few more notes: upgrading to the latest versions of the qxl driver and of the virtio-serial driver is not enough. Removing virtio from the command line while keeping qxl also avoids the freeze.

The agent has nothing to do with this freeze, since it also happens with this command line:
qemu-kvm -L /usr/share/seabios/  -bios /usr/share/seabios/bios.bin -spice
port=9001,disable-ticketing -enable-kvm -m 1G -usbdevice tablet  -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 
-drive file=/home/teuf/vm-pool/731628/win7.qcow2  -vga qxl

Comment 3 Alon Levy 2011-11-21 14:55:14 UTC
This seems to be the same issue described on the mailing list:

 http://lists.freedesktop.org/archives/spice-devel/2011-November/006167.html

Alon

Comment 4 Alon Levy 2011-11-23 10:59:12 UTC
Some notes, still no conclusion.

IO thread is always enabled since:
 commit 12d4536f7d911b6d87a766ad7300482ea663cea2
 Author: Anthony Liguori <aliguori.com>
 Date:   Mon Aug 22 08:24:58 2011 -0500

     main: force enabling of I/O thread

The problem seems to be related to virtio-serial initialization. It can be observed with the trace events system in qemu:
build qemu with --enable-trace-backend=stderr
and add to the qemu command line:
-trace events=events

Create a file named 'events' in the current directory containing:
virtio_queue_notify
virtio_irq
virtio_notify
virtio_set_status
virtio_serial_send_control_event
virtio_serial_throttle_port
virtio_serial_handle_control_message
virtio_serial_handle_control_message_port
virtio_console_flush_buf
virtio_console_chr_read
virtio_console_chr_event
virtio_blk_req_complete
virtio_blk_rw_complete
virtio_blk_handle_write

NB: The full list of trace events is in the trace-events file in the qemu git tree.
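The tracing setup described above can be sketched as follows (a sketch only; the qemu source-tree path, the disk image path, and the output file name are placeholders, while the configure flag, the -trace option, and the event names come straight from this comment):

```shell
# 1. Build qemu with the stderr trace backend (done from a qemu source tree):
#    ./configure --enable-trace-backend=stderr && make

# 2. Create the 'events' file in the current directory, listing the
#    trace points to enable (same list as in this comment):
cat > events <<'EOF'
virtio_queue_notify
virtio_irq
virtio_notify
virtio_set_status
virtio_serial_send_control_event
virtio_serial_throttle_port
virtio_serial_handle_control_message
virtio_serial_handle_control_message_port
virtio_console_flush_buf
virtio_console_chr_read
virtio_console_chr_event
virtio_blk_req_complete
virtio_blk_rw_complete
virtio_blk_handle_write
EOF

# 3. Run qemu with tracing enabled; enabled trace points are printed
#    to stderr, so redirect it to a file for later inspection:
#    qemu-kvm -trace events=events ... 2> trace.log
```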

Comment 5 Zeeshan Ali 2011-12-02 03:11:23 UTC
Interesting that I can reproduce this issue already at login time. Currently we have a work-around in Boxes to use 'vga' instead of 'qxl' when installing win7.

Comment 6 Zeeshan Ali 2011-12-02 03:12:38 UTC
(In reply to comment #5)
> Interesting that I can reproduce this issue already at login time.

Err.. I meant 'installation' not 'login time'.

Comment 7 Zeeshan Ali 2011-12-05 01:45:23 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > Interesting that I can reproduce this issue already at login time.
> 
> Err.. I meant 'installation' not 'login time'.

I don't know about the actual reported scenario (yet), but I can already verify that I can't reproduce this issue during installation any more with the latest upstream QEMU (1.0) release.

Comment 8 Christophe Fergeau 2011-12-05 12:17:46 UTC
It seems to no longer happen here, but I also can't reproduce it on a qemu version that used to have the bug... i.e. I don't think it's directly related to qemu 1.0.

Comment 9 nicolas 2012-02-18 16:44:18 UTC
This bug reappeared for me:
with qemu 1.0, 1.0.1, and the latest from git,
the latest virtio-serial driver from git,
the latest qxl driver from spice-space.org/download/

When starting more than one machine (10 for example),
with Windows 7, XP 32-bit, or XP 64-bit guests,
3 or 4 of them freeze at Windows boot.

How to reproduce: -vga qxl -device virtio
(no spice agent in the VM, no -spice on the qemu command line).

What is the interaction between qxl and virtio without vdagent/vdservice?

Comment 10 nicolas 2012-02-21 06:33:45 UTC
Trace events of virtio when there is no problem
(after this dump, another dump while the VM is freezing):

 /usr/local/bin/qemu -name TEST_RW020 -readconfig /etc/ich9-ehci-uhci.cfg -device usb-tablet  -spice port=11963,disable-ticketing  -vga qxl   -device virtio-serial -chardev spicevmc,id=vdagent,debug=0,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0  -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,debug=0 -chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2,debug=0 -chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3,debug=0  -vnc 10.98.98.1:133 -monitor tcp:127.0.0.1:10133,server,nowait,nodelay  -soundhw ac97 -m 512 -pidfile /var/run/qemu/TEST_RW020.pid -k fr -net tap,vlan=5,name=externe,script=/etc/qemu-ifEup,downscript=/etc/qemu-ifEdown,ifname=vmEtap33 -net nic,vlan=5,macaddr=ac:de:49:17:cb:81,model=e1000 -drive file=/swapfile-guest/swap1,if=ide,index=1,media=disk,snapshot=on  -rtc base=localtime -no-hpet -cpu host -drive file=/mnt/vdisk/images/VM-TEST_RW020.1329765842.018151,index=0,media=disk,snapshot=on,cache=unsafe  -fda fat:floppy:/mnt/vdisk/diskconf/TEST_RW020

*** EHCI support is under development ***
virtio_serial_send_control_event port 1, event 1, value 1
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 1
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_set_status vdev 0x7fc6e9ca57f0 val 5
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 63 vq 0x7fc6e9ca6cd0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6cd0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 62 vq 0x7fc6e9ca6c80
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 61 vq 0x7fc6e9ca6c30
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6c30
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 60 vq 0x7fc6e9ca6be0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 59 vq 0x7fc6e9ca6b90
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6b90
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 58 vq 0x7fc6e9ca6b40
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 57 vq 0x7fc6e9ca6af0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6af0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 56 vq 0x7fc6e9ca6aa0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 55 vq 0x7fc6e9ca6a50
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6a50
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 54 vq 0x7fc6e9ca6a00
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 53 vq 0x7fc6e9ca69b0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca69b0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 52 vq 0x7fc6e9ca6960
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 51 vq 0x7fc6e9ca6910
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6910
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 50 vq 0x7fc6e9ca68c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 49 vq 0x7fc6e9ca6870
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6870
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 48 vq 0x7fc6e9ca6820
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 47 vq 0x7fc6e9ca67d0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca67d0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 46 vq 0x7fc6e9ca6780
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 45 vq 0x7fc6e9ca6730
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6730
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 44 vq 0x7fc6e9ca66e0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 43 vq 0x7fc6e9ca6690
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6690
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 42 vq 0x7fc6e9ca6640
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 41 vq 0x7fc6e9ca65f0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca65f0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 40 vq 0x7fc6e9ca65a0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 39 vq 0x7fc6e9ca6550
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6550
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 38 vq 0x7fc6e9ca6500
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 37 vq 0x7fc6e9ca64b0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca64b0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 36 vq 0x7fc6e9ca6460
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 35 vq 0x7fc6e9ca6410
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6410
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 34 vq 0x7fc6e9ca63c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 33 vq 0x7fc6e9ca6370
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6370
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 32 vq 0x7fc6e9ca6320
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 31 vq 0x7fc6e9ca62d0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca62d0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 30 vq 0x7fc6e9ca6280
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 29 vq 0x7fc6e9ca6230
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6230
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 28 vq 0x7fc6e9ca61e0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 27 vq 0x7fc6e9ca6190
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6190
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 26 vq 0x7fc6e9ca6140
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 25 vq 0x7fc6e9ca60f0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca60f0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 24 vq 0x7fc6e9ca60a0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 23 vq 0x7fc6e9ca6050
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6050
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 22 vq 0x7fc6e9ca6000
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 21 vq 0x7fc6e9ca5fb0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5fb0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 20 vq 0x7fc6e9ca5f60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 19 vq 0x7fc6e9ca5f10
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5f10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 18 vq 0x7fc6e9ca5ec0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 17 vq 0x7fc6e9ca5e70
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5e70
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 16 vq 0x7fc6e9ca5e20
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 15 vq 0x7fc6e9ca5dd0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5dd0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 14 vq 0x7fc6e9ca5d80
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 13 vq 0x7fc6e9ca5d30
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5d30
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 12 vq 0x7fc6e9ca5ce0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 11 vq 0x7fc6e9ca5c90
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5c90
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 10 vq 0x7fc6e9ca5c40
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 9 vq 0x7fc6e9ca5bf0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5bf0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 8 vq 0x7fc6e9ca5ba0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 7 vq 0x7fc6e9ca5b50
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5b50
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 6 vq 0x7fc6e9ca5b00
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 0, value 1
virtio_serial_send_control_event port 1, event 1, value 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 1 vq 0x7fc6e9ca5970
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5970
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 0 vq 0x7fc6e9ca5920
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 3, value 1
virtio_serial_handle_control_message_port port 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 6, value 1
virtio_serial_handle_control_message_port port 1
virtio_console_chr_event port 1, event 2
virtio_serial_send_control_event port 1, event 6, value 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_console_flush_buf port 1, in_len 36, out_len 36
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 6, value 0
virtio_serial_handle_control_message_port port 1
virtio_console_chr_event port 1, event 5
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_serial_send_control_event port 1, event 6, value 0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_set_status vdev 0x7fc6e9ca57f0 val 1
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 0
virtio_set_status vdev 0x7fc6e9ca57f0 val 1
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_set_status vdev 0x7fc6e9ca57f0 val 5
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 63 vq 0x7fc6e9ca6cd0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6cd0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 62 vq 0x7fc6e9ca6c80
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 61 vq 0x7fc6e9ca6c30
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6c30
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 60 vq 0x7fc6e9ca6be0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 59 vq 0x7fc6e9ca6b90
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6b90
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 58 vq 0x7fc6e9ca6b40
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 57 vq 0x7fc6e9ca6af0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6af0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 56 vq 0x7fc6e9ca6aa0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 55 vq 0x7fc6e9ca6a50
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6a50
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 54 vq 0x7fc6e9ca6a00
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 53 vq 0x7fc6e9ca69b0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca69b0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 52 vq 0x7fc6e9ca6960
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 51 vq 0x7fc6e9ca6910
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6910
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 50 vq 0x7fc6e9ca68c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 49 vq 0x7fc6e9ca6870
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6870
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 48 vq 0x7fc6e9ca6820
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 47 vq 0x7fc6e9ca67d0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca67d0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 46 vq 0x7fc6e9ca6780
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 45 vq 0x7fc6e9ca6730
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6730
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 44 vq 0x7fc6e9ca66e0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 43 vq 0x7fc6e9ca6690
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6690
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 42 vq 0x7fc6e9ca6640
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 41 vq 0x7fc6e9ca65f0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca65f0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 40 vq 0x7fc6e9ca65a0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 39 vq 0x7fc6e9ca6550
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6550
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 38 vq 0x7fc6e9ca6500
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 37 vq 0x7fc6e9ca64b0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca64b0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 36 vq 0x7fc6e9ca6460
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 35 vq 0x7fc6e9ca6410
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6410
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 34 vq 0x7fc6e9ca63c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 33 vq 0x7fc6e9ca6370
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6370
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 32 vq 0x7fc6e9ca6320
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 31 vq 0x7fc6e9ca62d0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca62d0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 30 vq 0x7fc6e9ca6280
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 29 vq 0x7fc6e9ca6230
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6230
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 28 vq 0x7fc6e9ca61e0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 27 vq 0x7fc6e9ca6190
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6190
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 26 vq 0x7fc6e9ca6140
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 25 vq 0x7fc6e9ca60f0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca60f0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 24 vq 0x7fc6e9ca60a0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 23 vq 0x7fc6e9ca6050
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca6050
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 22 vq 0x7fc6e9ca6000
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 21 vq 0x7fc6e9ca5fb0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5fb0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 20 vq 0x7fc6e9ca5f60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 19 vq 0x7fc6e9ca5f10
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5f10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 18 vq 0x7fc6e9ca5ec0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 17 vq 0x7fc6e9ca5e70
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5e70
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 16 vq 0x7fc6e9ca5e20
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 15 vq 0x7fc6e9ca5dd0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5dd0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 14 vq 0x7fc6e9ca5d80
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 13 vq 0x7fc6e9ca5d30
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5d30
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 12 vq 0x7fc6e9ca5ce0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 11 vq 0x7fc6e9ca5c90
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5c90
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 10 vq 0x7fc6e9ca5c40
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 9 vq 0x7fc6e9ca5bf0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5bf0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 8 vq 0x7fc6e9ca5ba0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 7 vq 0x7fc6e9ca5b50
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5b50
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 6 vq 0x7fc6e9ca5b00
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 1 vq 0x7fc6e9ca5970
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5970
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 0 vq 0x7fc6e9ca5920
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 0, value 1
virtio_serial_send_control_event port 1, event 1, value 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 4 vq 0x7fc6e9ca5a60
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 3, value 1
virtio_serial_handle_control_message_port port 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 3 vq 0x7fc6e9ca5a10
virtio_serial_handle_control_message event 6, value 1
virtio_serial_handle_control_message_port port 1
virtio_console_chr_event port 1, event 2
virtio_serial_send_control_event port 1, event 6, value 1
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca59c0
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5a10
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 2 vq 0x7fc6e9ca59c0
virtio_queue_notify vdev 0x7fc6e9ca57f0 n 5 vq 0x7fc6e9ca5ab0
virtio_console_flush_buf port 1, in_len 36, out_len 36
virtio_notify vdev 0x7fc6e9ca57f0 vq 0x7fc6e9ca5ab0


########################################################################
/usr/local/bin/qemu -name TEST_RW025 -readconfig /etc/ich9-ehci-uhci.cfg -device usb-tablet  -spice port=11968,disable-ticketing  -vga qxl   -device virtio-serial -chardev spicevmc,id=vdagent,debug=0,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0  -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,debug=0 -chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2,debug=0 -chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3,debug=0  -vnc 10.98.98.1:138 -monitor tcp:127.0.0.1:10138,server,nowait,nodelay  -soundhw ac97 -m 512 -pidfile /var/run/qemu/TEST_RW025.pid -k fr -net tap,vlan=5,name=externe,script=/etc/qemu-ifEup,downscript=/etc/qemu-ifEdown,ifname=vmEtap38 -net nic,vlan=5,macaddr=ac:de:49:0b:34:09,model=e1000 -drive file=/swapfile-guest/swap1,if=ide,index=1,media=disk,snapshot=on  -rtc base=localtime -no-hpet -cpu host -drive file=/mnt/vdisk/images/VM-TEST_RW025.1329765843.2452409,index=0,media=disk,snapshot=on,cache=unsafe  -fda fat:floppy:/mnt/vdisk/diskconf/TEST_RW025 

*** EHCI support is under development ***
virtio_serial_send_control_event port 1, event 1, value 1
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 0
virtio_set_status vdev 0x7f29e7c17a20 val 1
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_set_status vdev 0x7f29e7c17a20 val 5
virtio_queue_notify vdev 0x7f29e7c17a20 n 63 vq 0x7f29e7c18f00
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18f00
virtio_queue_notify vdev 0x7f29e7c17a20 n 62 vq 0x7f29e7c18eb0
virtio_queue_notify vdev 0x7f29e7c17a20 n 61 vq 0x7f29e7c18e60
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18e60
virtio_queue_notify vdev 0x7f29e7c17a20 n 60 vq 0x7f29e7c18e10
virtio_queue_notify vdev 0x7f29e7c17a20 n 59 vq 0x7f29e7c18dc0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18dc0
virtio_queue_notify vdev 0x7f29e7c17a20 n 58 vq 0x7f29e7c18d70
virtio_queue_notify vdev 0x7f29e7c17a20 n 57 vq 0x7f29e7c18d20
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18d20
virtio_queue_notify vdev 0x7f29e7c17a20 n 56 vq 0x7f29e7c18cd0
virtio_queue_notify vdev 0x7f29e7c17a20 n 55 vq 0x7f29e7c18c80
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18c80
virtio_queue_notify vdev 0x7f29e7c17a20 n 54 vq 0x7f29e7c18c30
virtio_queue_notify vdev 0x7f29e7c17a20 n 53 vq 0x7f29e7c18be0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18be0
virtio_queue_notify vdev 0x7f29e7c17a20 n 52 vq 0x7f29e7c18b90
virtio_queue_notify vdev 0x7f29e7c17a20 n 51 vq 0x7f29e7c18b40
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18b40
virtio_queue_notify vdev 0x7f29e7c17a20 n 50 vq 0x7f29e7c18af0
virtio_queue_notify vdev 0x7f29e7c17a20 n 49 vq 0x7f29e7c18aa0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18aa0
virtio_queue_notify vdev 0x7f29e7c17a20 n 48 vq 0x7f29e7c18a50
virtio_queue_notify vdev 0x7f29e7c17a20 n 47 vq 0x7f29e7c18a00
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18a00
virtio_queue_notify vdev 0x7f29e7c17a20 n 46 vq 0x7f29e7c189b0
virtio_queue_notify vdev 0x7f29e7c17a20 n 45 vq 0x7f29e7c18960
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18960
virtio_queue_notify vdev 0x7f29e7c17a20 n 44 vq 0x7f29e7c18910
virtio_queue_notify vdev 0x7f29e7c17a20 n 43 vq 0x7f29e7c188c0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c188c0
virtio_queue_notify vdev 0x7f29e7c17a20 n 42 vq 0x7f29e7c18870
virtio_queue_notify vdev 0x7f29e7c17a20 n 41 vq 0x7f29e7c18820
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18820
virtio_queue_notify vdev 0x7f29e7c17a20 n 40 vq 0x7f29e7c187d0
virtio_queue_notify vdev 0x7f29e7c17a20 n 39 vq 0x7f29e7c18780
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18780
virtio_queue_notify vdev 0x7f29e7c17a20 n 38 vq 0x7f29e7c18730
virtio_queue_notify vdev 0x7f29e7c17a20 n 37 vq 0x7f29e7c186e0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c186e0
virtio_queue_notify vdev 0x7f29e7c17a20 n 36 vq 0x7f29e7c18690
virtio_queue_notify vdev 0x7f29e7c17a20 n 35 vq 0x7f29e7c18640
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18640
virtio_queue_notify vdev 0x7f29e7c17a20 n 34 vq 0x7f29e7c185f0
virtio_queue_notify vdev 0x7f29e7c17a20 n 33 vq 0x7f29e7c185a0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c185a0
virtio_queue_notify vdev 0x7f29e7c17a20 n 32 vq 0x7f29e7c18550
virtio_queue_notify vdev 0x7f29e7c17a20 n 31 vq 0x7f29e7c18500
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18500
virtio_queue_notify vdev 0x7f29e7c17a20 n 30 vq 0x7f29e7c184b0
virtio_queue_notify vdev 0x7f29e7c17a20 n 29 vq 0x7f29e7c18460
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18460
virtio_queue_notify vdev 0x7f29e7c17a20 n 28 vq 0x7f29e7c18410
virtio_queue_notify vdev 0x7f29e7c17a20 n 27 vq 0x7f29e7c183c0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c183c0
virtio_queue_notify vdev 0x7f29e7c17a20 n 26 vq 0x7f29e7c18370
virtio_queue_notify vdev 0x7f29e7c17a20 n 25 vq 0x7f29e7c18320
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18320
virtio_queue_notify vdev 0x7f29e7c17a20 n 24 vq 0x7f29e7c182d0
virtio_queue_notify vdev 0x7f29e7c17a20 n 23 vq 0x7f29e7c18280
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18280
virtio_queue_notify vdev 0x7f29e7c17a20 n 22 vq 0x7f29e7c18230
virtio_queue_notify vdev 0x7f29e7c17a20 n 21 vq 0x7f29e7c181e0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c181e0
virtio_queue_notify vdev 0x7f29e7c17a20 n 20 vq 0x7f29e7c18190
virtio_queue_notify vdev 0x7f29e7c17a20 n 19 vq 0x7f29e7c18140
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18140
virtio_queue_notify vdev 0x7f29e7c17a20 n 18 vq 0x7f29e7c180f0
virtio_queue_notify vdev 0x7f29e7c17a20 n 17 vq 0x7f29e7c180a0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c180a0
virtio_queue_notify vdev 0x7f29e7c17a20 n 16 vq 0x7f29e7c18050
virtio_queue_notify vdev 0x7f29e7c17a20 n 15 vq 0x7f29e7c18000
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c18000
virtio_queue_notify vdev 0x7f29e7c17a20 n 14 vq 0x7f29e7c17fb0
virtio_queue_notify vdev 0x7f29e7c17a20 n 13 vq 0x7f29e7c17f60
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17f60
virtio_queue_notify vdev 0x7f29e7c17a20 n 12 vq 0x7f29e7c17f10
virtio_queue_notify vdev 0x7f29e7c17a20 n 11 vq 0x7f29e7c17ec0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17ec0
virtio_queue_notify vdev 0x7f29e7c17a20 n 10 vq 0x7f29e7c17e70
virtio_queue_notify vdev 0x7f29e7c17a20 n 9 vq 0x7f29e7c17e20
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17e20
virtio_queue_notify vdev 0x7f29e7c17a20 n 8 vq 0x7f29e7c17dd0
virtio_queue_notify vdev 0x7f29e7c17a20 n 7 vq 0x7f29e7c17d80
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17d80
virtio_queue_notify vdev 0x7f29e7c17a20 n 6 vq 0x7f29e7c17d30
virtio_queue_notify vdev 0x7f29e7c17a20 n 5 vq 0x7f29e7c17ce0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17ce0
virtio_queue_notify vdev 0x7f29e7c17a20 n 4 vq 0x7f29e7c17c90
virtio_queue_notify vdev 0x7f29e7c17a20 n 3 vq 0x7f29e7c17c40
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17c40
virtio_queue_notify vdev 0x7f29e7c17a20 n 2 vq 0x7f29e7c17bf0
virtio_queue_notify vdev 0x7f29e7c17a20 n 1 vq 0x7f29e7c17ba0
virtio_notify vdev 0x7f29e7c17a20 vq 0x7f29e7c17ba0
virtio_queue_notify vdev 0x7f29e7c17a20 n 0 vq 0x7f29e7c17b50

Comment 11 Emanuel Rietveld 2012-02-21 12:55:35 UTC
I can reproduce this on a standard F16 install, up to date as of today, using virt-manager. The Windows logo animates, then freezes, and nothing happens. This happens regardless of whether or not I connect a viewer.

The issue happens if Video is set to QXL and the spice channel is available.

Setting Video to Cirrus makes the issue go away, even if the spice channel is available.

Switching the display type from spice to VNC and choosing to remove the spice channels when the dialog pops up makes the issue go away, even with video set to QXL. Switching back to spice and choosing NOT to add the spice channels when the dialog pops up does not make the issue reappear. The issue reappears when I choose to add the spice channels.

In the guest, I'm using the latest serial, network, and storage drivers from
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
which at the time of this comment is virtio-win-0.1-22.iso.

I have the spice agent and QXL drivers installed, from
http://spice-space.org/download/binaries/qxl-win-0.1012-20111107-ff93ec988c.zip
http://spice-space.org/download/binaries/vdagent-win32_20111124.zip

The issue is 100% reproducible. I have installed a new Windows 32-bit VM and it has the same issue.

Please let me know if there is any testing I can help out with.

Comment 12 Emanuel Rietveld 2012-02-21 13:33:47 UTC
I can get the spice agent working if I uninstall the QXL drivers and then set video to QXL. Windows now reports the "Standard VGA Driver" for the QXL device, and there are no issues during boot, despite the spice channels being available and working and video being set to QXL.

Comment 13 Vadim Rozenfeld 2012-02-21 21:09:21 UTC
(In reply to comment #11)
> Please let me know if there is any testing I can help out with.

Could you please enable NMICrashDump on your system
(http://support.microsoft.com/kb/927069), generate a crash dump when the
system is hung, and upload it as an attachment?

TIA,
Vadim.

Comment 14 Emanuel Rietveld 2012-02-22 13:30:07 UTC
Created attachment 564965 [details]
NMI Crash Dump

Awesome, I didn't know you could do such cool stuff.

I created a DWORD value NMICrashDump = 1 in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl as described in the article you linked, rebooted Windows, and then tested by executing "virsh inject-nmi <name>". Although I did get the blue screen saying it was writing a crash dump, it did not actually create C:\Windows\MEMORY.DMP until I successfully booted the machine again. Another thing that confused me at first is that once you close the "Windows has recovered from an unexpected shutdown" dialog box, the memory dump is deleted.

With this in mind, I went into Device Manager, right-clicked "Standard VGA Adapter" and selected "Update driver", navigated to the folder where I had downloaded the QXL driver from http://spice-space.org/download/binaries/qxl-win-0.1012-20111107-ff93ec988c.zip, and let Windows update the driver. I rebooted when prompted. Once the Windows logo stopped animating, I executed "virsh inject-nmi <name>". The blue crash dump screen appeared. Having unchecked "automatically restart" in "Properties of my computer" -> "Advanced" -> "Startup and Recovery", I let it go to 100% writing the crash dump, then forced the virtual machine off, set video to vga, and booted.

Attached is the resulting C:\Windows\MEMORY.DMP [77MB, Kernel memory only]
I have gzipped it. The compressed size is 18MB.

The file is also at http://xls01.freecult.org/750773/MEMORY.DMP.gz

Comment 15 Vadim Rozenfeld 2012-02-23 09:06:43 UTC
Thank you, Emanuel.

Looks weird. The running thread is stuck in a READ_PORT_UCHAR operation,
which is an extremely uncommon scenario.
Can you please try to reproduce this problem with "-smp 2"?

Best regards,
Vadim.

Comment 16 Dennis Appelon Nielsen 2012-02-23 12:14:07 UTC
I have kind of the same problem.

I have been using Win 7 64-bit with spice-server-0.10.1-1.fc16.x86_64 for some time now, and it works very well, but I would really like the Red Hat QXL GPU to have a correct driver installed, since that is the last performance bottleneck I have. Windows 7 64-bit forcing drivers to be signed (or the system to be "hacked") held me back, until I learned that I could build the Red Hat QXL GPU driver myself. So I went ahead and gave it a try, following this guide: http://spice-space.org/page/WinQXL. Most of it works just fine, but when I then want to install qxl.inf from Device Manager or with pnputil.exe, it gives me an error saying "The system cannot find the file specified" and I have to click close...

I also tried the latest Red Hat QXL GPU driver from http://spice-space.org/download.html, called spice-client-win32-0.6.3.zip. It installs, but when I boot my system it hangs with the CPU/VPU at 100% and nothing happens. I have to reboot and select "Last Known Good Configuration" to get into the Windows client again...

Comment 17 Dennis Appelon Nielsen 2012-02-23 12:23:33 UTC
Here is the output from pnputil.exe 

c:\qxl\install_fre_win7_AMD64>pnputil.exe -i -a qxl.inf
Microsoft PnP Utility

Processing inf :            qxl.inf
Adding the driver package failed : The system cannot find the file specified.

Total attempted:              1
Number successfully imported: 0


c:\qxl\install_fre_win7_AMD64>

I get the same error from the GUI.

Comment 18 Dennis Appelon Nielsen 2012-02-24 08:47:22 UTC
Here is an evening of work to get a memory dump from my stuck Win 7 64-bit.

I used this tech guide from MS to make the nmi command work: http://support.microsoft.com/kb/927069

I discovered that when I try to install the Red Hat QXL GPU driver on my Windows 7 64-bit (in test mode), it hangs with the Windows logo up on the next start-up. I then ran

virsh qemu-monitor-command Win7_64bit_Groupinfra.com --hmp nmi

to force a memory dump, knowing that I would be able to start my Win 7 in "Last Known Good Configuration". I then compressed the memory dump (using XZ Utils) and uploaded it to 2shared at these links:

Mem dump part 1: http://www.2shared.com/file/fBMbiI3H/xaa.html 
Mem dump part 2: http://www.2shared.com/file/tsKGMrFz/xab.html

(I had to use split to break the dump into two files under 200 MB.)
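The compress-and-split round trip described above can be reproduced with standard tools; a minimal sketch (file names and sizes here are illustrative, using 2 MB chunks instead of the 200 MB upload limit):

```shell
# Create a sample "dump" to stand in for C:\Windows\MEMORY.DMP
dd if=/dev/urandom of=MEMORY.DMP bs=1M count=5 2>/dev/null

# Compress with XZ Utils, keeping the original for comparison
xz -k MEMORY.DMP                  # produces MEMORY.DMP.xz

# Split into chunks below the upload size limit
split -b 2M MEMORY.DMP.xz dump-part-   # dump-part-aa, dump-part-ab, ...

# Receiver side: reassemble the chunks (lexical glob order is split's order)
cat dump-part-* > rejoined.xz
xz -dc rejoined.xz > rejoined.dmp

# Verify the round trip was lossless
cmp -s MEMORY.DMP rejoined.dmp && echo "round-trip OK"
```

Reassembling with `cat` works because `split` names its output files in lexical order by default.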

I use Fedora 16 x86_64 with spice-server-0.10.1-1.fc16.x86_64 and the QXL driver from http://spice-space.org/download/binaries/qxl-win-0.1012-20111107-ff93ec988c.zip

When I install it and reboot, Win 7 hangs at the Windows boot logo and doesn't move on; that is the point where I dumped the memory...

Comment 19 Vadim Rozenfeld 2012-02-26 08:13:27 UTC
(In reply to comment #18)

Could you please try a more recent vioserial driver from
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/ ?

It will also be interesting to see whether this problem is reproducible on an
SMP guest.

Thanks,
Vadim.

Comment 20 Emanuel Rietveld 2012-02-27 12:57:46 UTC
In machine details -> Processor, I set the current allocation from 1 to 2 and started the VM. I then enabled NMICrashDump, installed the driver, and rebooted. The system did not hang. Then I set the current allocation from 2 back to 1 and rebooted. The system still does not hang.

I will test installing the driver with 1 CPU, rebooting to make it hang, and then increasing the number of CPUs to see if it stops hanging. I just wanted to post the above preliminary results here because I'm not sure when I will have time.

Comment 21 Emanuel Rietveld 2012-02-27 15:38:32 UTC
A machine that originally hangs also stops hanging once I give it two CPUs. If I then set it back to 1 CPU, it sometimes starts hanging again, sometimes keeps working.

Comment 22 Vadim Rozenfeld 2012-02-27 16:09:11 UTC
(In reply to comment #21)
> A machine that originally hangs also stops hanging once I give it two CPUs. If
> I then set it back to 1 CPU, it sometimes starts hanging again, sometimes keeps
> working.

Thank you. That explains a lot.
Just to be sure that we are on the same page, 
what is the driver version?

Comment 24 Fedora Admin XMLRPC Client 2012-03-15 17:55:36 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 25 Coolper 2012-03-16 01:56:34 UTC
I have kind of the same problem when I use "-smp 1,sockets=1,cores=1,threads=1" on the command line. I got a crash dump file via "NMI crash dump" and analyzed it with windbg, and I get the same result as in the dump file Emanuel Rietveld provided (the running thread is stuck in a READ_PORT_UCHAR operation).
If I use "-smp 2,sockets=1,cores=2,threads=1" on the command line, Windows 7 will not hang.
Please point me to the part of the source code I should look at.

Comment 26 Vadim Rozenfeld 2012-03-27 09:10:25 UTC
(In reply to comment #14)
> Created attachment 564965 [details]
> 
> Attached is the resulting C:\Windows\MEMORY.DMP [77MB, Kernel memory only]
> I have gzipped it. The compressed size is 18MB.
> 
> The file is also at http://xls01.freecult.org/750773/MEMORY.DMP.gz

I rechecked this crash dump again. There is one thing I would like to add to my
previous comment (#c15): the thread is stuck inside the netkvm driver. Adding Yan, because it might be interesting to him.

Comment 27 Yvugenfi@redhat.com 2012-03-27 13:13:38 UTC
(In reply to comment #26)
> (In reply to comment #14)
> > Created attachment 564965 [details]
> > 
> > Attached is the resulting C:\Windows\MEMORY.DMP [77MB, Kernel memory only]
> > I have gzipped it. The compressed size is 18MB.
> > 
> > The file is also at http://xls01.freecult.org/750773/MEMORY.DMP.gz
> 
> I rechecked this crash dump again. There is one thing I would like to add to my
> previous comment (#c15): the thread is stuck inside the netkvm driver. Adding Yan,
> because it might be interesting to him.

Can someone also provide the command line that was used to create the dump in comment #14?

I see that in the original report the VM didn't have a NIC at all, while the second command line, which appears in comment #10, has an e1000 NIC.

Comment 28 Yvugenfi@redhat.com 2012-03-27 13:16:38 UTC
(In reply to comment #18)
> Her is a evening of work to get a memory dump from my stuck Win 7 64bit 
> 
> I used this tech guide from MS to make the nmi command work:
> 
> Mem dump part 1: http://www.2shared.com/file/fBMbiI3H/xaa.html 
> Mem dump part 2: http://www.2shared.com/file/tsKGMrFz/xab.html
> 
> (I had to use split to split the dump in to 2 files under 200MB)
> 
Any chance you can upload this dump again? The link is no longer valid. Please archive it before uploading.

Comment 29 Emanuel Rietveld 2012-03-28 08:50:48 UTC
I'm someone, and I also posted the crash dump in comment #14. Although I cannot verify whether the command line below is the same one I used originally, it makes the issue reappear:

/usr/bin/qemu-kvm -S -M pc-0.14 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name win71 -uuid 8653612e-7329-f663-019a-b0fbf37ec1e5 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/win71.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/vms/win71.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=writeback -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/lib/libvirt/images/iso/windows/virtio-win-0.1-22.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ae:b8:f8,bus=pci.0,addr=0x3 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5900,addr=127.0.0.1,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7

Comment 30 Emanuel Rietveld 2012-03-28 10:25:42 UTC
This is the shortest command line I can produce that reliably hangs. Making seemingly unrelated changes, like removing the USB tablet, removing the sound card, or removing the CD drive, makes it stop reliably reproducing the hang.

/usr/bin/qemu-kvm \
-M pc-0.14 \
-enable-kvm \
-m 1024 -smp 1,sockets=1,cores=1,threads=1 \
-nodefconfig \
-nodefaults \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
-drive file=/var/lib/libvirt/images/vms/win71.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=writeback \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-drive file=/var/lib/libvirt/images/iso/windows/virtio-win-0.1-22.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
-chardev spicevmc,id=charchannel0,name=vdagent \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 \
-device usb-tablet,id=input0 \
-spice port=5900,addr=127.0.0.1,disable-ticketing \
-vga qxl \
-global qxl-vga.vram_size=67108864 \
-device intel-hda,id=sound0,bus=pci.0,addr=0x4 \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0

Comment 31 Chad Feller 2012-04-05 20:26:37 UTC
I'm seeing this issue as well.  

Fedora 16 host, Win 7 64-bit guest.

Comment 32 Yvugenfi@redhat.com 2012-04-08 08:40:41 UTC
(In reply to comment #30)
> This is the shortest command line I can produce that reliably hangs. Making
> seemingly unrelated changes, like removing the USB tablet, removing the sound
> card, or removing the CD drive, makes it stop reliably reproducing the hang.

Can you look at Device Manager and see how interrupts are allocated in the scenarios with hangs? Check which devices are sharing the same interrupt.

Run devmgmt.msc -> View -> Resources by type -> Expand "Interrupt request (IRQ)"

Comment 33 Vadim Rozenfeld 2012-04-08 16:27:37 UTC
(In reply to comment #25)
> I have kind of the same problem when I use "-smp 1,sockets=1,cores=1,threads=1"
> in command line,I get a crash dump file by "NMI crash dump" and analyze it with
> windbg,I get the same result as Emanuel Rietveld provided dump file(The running
> thread stuck in READ_PORT_UCHAR operation).
> If I use "-smp 2,sockets=1,cores=2,threads=1" in command line, windows 7 will
> not hang.
> Please point me which part of source code I should see.

This should be fixed in build25:
 
http://download.devel.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/25/win/virtio-win-prewhql-0.1.zip
http://people.redhat.com/vrozenfe/build-25/virtio-win-prewhql-0.1.zip

Cheers,
Vadim.

Comment 34 nicolas 2012-04-12 14:36:01 UTC
Hello, 
I can confirm that build25 seems to fix the bug
(VM guests: Windows 7 / XP, qemu-kvm 1.0.1).

Regards, 
Nicolas Prochazka.

Comment 35 Daniel Berrangé 2012-04-12 14:43:27 UTC
It wouldn't surprise me if all the random hangs seen with Win7 and KVM are due to clock timing issues. Windows 7 is particularly sensitive to inaccurate interrupt delivery for the PIT/RTC. To obtain stability, management apps should set the following config params when running KVM guests:

<clock offset="utc">   (change 'utc' to 'localtime' for Windows)
   <timer name="rtc" tickpolicy="catchup"/>
   <timer name="pit" tickpolicy="delay"/>
</clock>

Although this is important for Windows, it doesn't harm other OSes, so it is reasonable to set these for all guests.

Unfortunately, AFAIK, none of virt-install, virt-manager or GNOME Boxes currently set this.
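For anyone applying this suggestion by hand, a sketch of where the snippet goes in a libvirt-managed guest definition; the domain name `win7` is illustrative:

```xml
<!-- Edit the guest with: virsh edit win7 -->
<!-- The <clock> element is a direct child of the top-level <domain> element -->
<domain type='kvm'>
  ...
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
  </clock>
  ...
</domain>
```

After saving, `virsh dumpxml win7` should show the timer elements; the change takes effect on the guest's next cold boot.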

Comment 36 Christophe Fergeau 2012-04-13 09:38:24 UTC
(In reply to comment #35)
> Unfortunately, AFAIK, none of virt-install, virt-manager or GNOME Boxes
> currently set this.

I've just filed https://bugzilla.gnome.org/show_bug.cgi?id=674035 for Boxes.

Comment 37 Emanuel Rietveld 2012-04-23 11:46:34 UTC
I no longer have any problems. However, I have both applied Daniel Berrangé's suggestion and installed the build-26 prewhql drivers, and I am not sure which of the two resolved my issues.

Comment 38 Fedora End Of Life 2013-01-16 20:41:18 UTC
This message is a reminder that Fedora 16 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 16. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '16'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 16's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 16 is end of life. If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, you are encouraged to click on 
"Clone This Bug" and open it against that version of Fedora.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 39 Cole Robinson 2013-02-11 22:27:58 UTC
It sounds like this was solved at some point, so closing. If anyone can still reproduce on F18, please open a new report and we can go from there.

