Bug 823818 - [virtio-win][compatibility] Guest freezes after terminating the virtserialport receiving side on the host
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virtio-win
6.3
Unspecified Unspecified
Priority: urgent  Severity: urgent
: rc
: ---
Assigned To: Gal Hammer
Virtualization Bugs
: ZStream
: 915846 949135 968545
Depends On:
Blocks: 896690 964070 976310
 
Reported: 2012-05-22 04:57 EDT by Yang Zhao
Modified: 2017-02-09 21:01 EST (History)
36 users

See Also:
Fixed In Version: virtio-win-prewhql-0.1-59
Doc Type: Bug Fix
Doc Text:
Previously, a guest became unresponsive after the virtqueue was full. This caused a bug check to occur, and the netkvm driver wrote to the read-only interrupt status (ISR) register. The write caused QEMU to respond with a "virtio_ioport_write: unexpected address 0x13 value 0x0" error message. With this update, the guest no longer freezes on the destination host after the listening side of a virtserialport is terminated, and the error message no longer occurs in the described scenario.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-21 18:56:40 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments (Terms of Use)
strace output of qemu process of hung Windows guest (717.65 KB, application/octet-stream) - 2013-04-01 22:33 EDT, Colin Coe
Image of device manager (22.36 KB, application/octet-stream) - 2013-04-07 19:20 EDT, Colin Coe
image showing username and reported errors (203.02 KB, application/octet-stream) - 2013-04-07 21:00 EDT, Colin Coe
loader.exe error when trying to reproduce this issue (8.09 KB, image/png) - 2013-06-27 22:25 EDT, IBM Bug Proxy
Add HyperV relaxed feature hook for VDSM (599 bytes, text/x-python) - 2013-06-27 22:25 EDT, IBM Bug Proxy


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 344933 None None None Never

Description Yang Zhao 2012-05-22 04:57:16 EDT
Description of problem:
On the dst-host, press Ctrl+C to stop the listening process; the guest then freezes and QEMU displays 'virtio_ioport_write: unexpected address 0x13 value 0x0'.

Version-Release number of selected component (if applicable):
src-host: qemu-kvm:qemu-kvm-0.12.1.2-2.209.el6_2.5.x86_64
          kernel:kernel-2.6.32-220.el6.x86_64
          seabios:seabios-0.6.1.2-8.el6.x86_64
dst-host: qemu-kvm-rhev:qemu-kvm-rhev-0.12.1.2-2.293.el6.x86_64
          kernel:kernel-2.6.32-272.el6.x86_64
          seabios:seabios-0.6.1.2-19.el6.x86_64

How reproducible:
sometimes, not always

Steps to Reproduce:
1. Start the guest on the src-host and dst-host with the following command lines:
(src-host)/usr/libexec/qemu-kvm -cpu cpu64-rhel6,+x2apic,family=0xf -smp 4 -m 4G -device virtio-balloon-pci,id=balloon0 -k en-us -usb -device usb-tablet,id=tablet0 -device intel-hda,id=sound0 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -drive file=win2k8-64-0507.raw,format=raw,cache=none,if=none,werror=stop,rerror=stop,id=drive-disk0,media=disk -device virtio-blk-pci,drive=drive-disk0,id=disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:12:2a:b2:11:8a -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0 -chardev socket,id=channel0,path=/tmp/tty1,server,nowait -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0,bus=virtio-serial0.0,id=port0 -rtc base=utc,clock=host,driftfix=slew -name win2k8 -spice port=5937,disable-ticketing -vga qxl -uuid e0ffa6c6-8dd1-4cd0-8a99-42afa0414034 -monitor stdio -M rhel6.2.0

(dst-host)/usr/libexec/qemu-kvm -cpu cpu64-rhel6,+x2apic,family=0xf -smp 4 -m 4G -device virtio-balloon-pci,id=balloon0 -k en-us -usb -device usb-tablet,id=tablet0 -device intel-hda,id=sound0 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -drive file=win2k8-64-0507.raw,format=raw,cache=none,if=none,werror=stop,rerror=stop,id=drive-disk0,media=disk -device virtio-blk-pci,drive=drive-disk0,id=disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:12:2a:b2:11:8a -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0 -chardev socket,id=channel0,path=/tmp/tty1,server,nowait -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0,bus=virtio-serial0.0,id=port0 -rtc base=utc,clock=host,driftfix=slew -name win2k8 -spice port=5937,disable-ticketing -vga qxl -uuid e0ffa6c6-8dd1-4cd0-8a99-42afa0414034 -monitor stdio -M rhel6.2.0 -incoming tcp:0:5888
2. In the guest, under Cygwin, execute:
  for ((i=0; i<10000000; i++)); do
    python VirtioChannel_Guest_send.py
    echo $i
  done
3. On the host, execute:
  while true; do
    python serial-host-send.py
    date
  done
4. Migrate from the src-host to the dst-host:
  migrate -d tcp:$IP:$port
5. On the dst-host, execute:
  nc -U /tmp/tty1
6. Press Ctrl+C to terminate nc -U /tmp/tty1
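The VirtioChannel_Guest_send.py script referenced in step 2 is not attached to this bug. Purely as illustration, a hypothetical guest-side sender might look like the sketch below; the device path, payload size, and function names are all assumptions, not the reporter's actual script (the ~1 MB payload follows the later note that the hang does not reproduce with very small files):

```python
# Hypothetical stand-in for VirtioChannel_Guest_send.py (the real script is
# not attached to this bug). Assumes the Windows guest exposes the port at
# the usual \\.\Global\<name> device path.
PORT_PATH = r"\\.\Global\org.linux-kvm.port.0"  # assumed device path

def make_payload(size=1024 * 1024):
    # Build a ~1 MB payload; the hang reportedly needs a transfer of
    # roughly this size to trigger.
    return b"x" * size

def send_once(path=PORT_PATH, size=1024 * 1024):
    # Open the virtio-serial port unbuffered and push one payload through it.
    with open(path, "wb", buffering=0) as port:
        port.write(make_payload(size))
```

The outer retry loop from step 2 would simply call this script repeatedly.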

Actual result:
The guest freezes after the listening side of the virtserialport is terminated on the RHEL 6.3.0 host, and QEMU displays 'virtio_ioport_write: unexpected address 0x13 value 0x0' in the qemu monitor.

Expected result: 
The guest keeps running.

Additional info:
Comment 2 Mike Cao 2012-05-22 05:32:53 EDT
This might be a virtio-win bug; moving to the virtio-win component.

the virtio-win is 1.4.1
Comment 3 Vadim Rozenfeld 2012-05-22 06:51:43 EDT
Is it reproducible with the most recent drivers?

Thanks,
Vadim.
Comment 4 Mike Cao 2012-05-22 07:01:15 EDT
(In reply to comment #3)
> Is it reproducible with the most recent drivers?
> 
> Thanks,
> Vadim.

Functional testing for virtio-win-1.5.1 is ongoing. We will update the status if we hit this bug on virtio-win-1.5.1.

Mike
Comment 5 Mike Cao 2012-07-16 23:29:14 EDT
Zhao Yang 

Please retry this issue on build 30 to see whether we can still reproduce it.
Comment 6 Yang Zhao 2012-07-23 23:30:09 EDT
Hi Mike,
  Reproduced this issue with the new build (virtio-win-prewhql-30) and the old build (virtio-win-1.4.0) on a win2k8-64 guest. Following are the details for the test:

environment:
src-host: qemu-kvm:qemu-kvm-0.12.1.2-2.209.el6_2.5.x86_64
          kernel:kernel-2.6.32-220.el6.x86_64
          seabios:seabios-0.6.1.2-8.el6.x86_64
dst-host: qemu-kvm-rhev:qemu-kvm-rhev-0.12.1.2-2.293.el6.x86_64
          kernel:kernel-2.6.32-272.el6.x86_64
          seabios:seabios-0.6.1.2-19.el6.x86_64
steps:
1. Start the guest on the src-host and dst-host with the following command lines:
(src-host)/usr/libexec/qemu-kvm -cpu cpu64-rhel6,+x2apic,family=0xf -smp 4 -m 4G -device virtio-balloon-pci,id=balloon0 -k en-us -usb -device usb-tablet,id=tablet0 -drive file=win2k8-64-bugverify.raw,format=raw,cache=none,if=none,werror=stop,rerror=stop,id=drive-disk0,media=disk -device virtio-blk-pci,drive=drive-disk0,id=disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:12:2a:00:11:02 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0 -chardev socket,id=channel0,path=/tmp/tty1,server,nowait -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0,bus=virtio-serial0.0,id=port0 -rtc base=utc,clock=host,driftfix=slew -name win2k8 -spice port=5931,disable-ticketing -vga qxl -uuid 1eddd2cb-19db-4aab-8026-b6e9155c5c2a -monitor stdio -M rhel6.2.0 

(dst-host)/usr/libexec/qemu-kvm -cpu cpu64-rhel6,+x2apic,family=0xf -smp 4 -m 4G -device virtio-balloon-pci,id=balloon0 -k en-us -usb -device usb-tablet,id=tablet0 -drive file=win2k8-64-bugverify.raw,format=raw,cache=none,if=none,werror=stop,rerror=stop,id=drive-disk0,media=disk -device virtio-blk-pci,drive=drive-disk0,id=disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:12:2a:00:11:02 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0 -chardev socket,id=channel0,path=/tmp/tty1,server,nowait -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0,bus=virtio-serial0.0,id=port0 -rtc base=utc,clock=host,driftfix=slew -name win2k8 -spice port=5931,disable-ticketing -vga qxl -uuid 1eddd2cb-19db-4aab-8026-b6e9155c5c2a -monitor stdio -M rhel6.2.0 -incoming tcp:0:5880

2. In the guest, under Cygwin, execute:
  for ((i=0; i<10000000; i++)); do
    python VirtioChannel_Guest_send.py org.linux-kvm.port.0
    echo $i
  done
3. On the host, execute:
  while true; do
    python serial-host-receive.py /tmp/tty1
    date
  done
4. Migrate from the src-host to the dst-host:
  migrate -d tcp:$IP:$port
5. On the dst-host, execute:
  nc -U /tmp/tty1
6. Press Ctrl+C to terminate nc -U /tmp/tty1

result:
The guest freezes after the listening side of the virtserialport is terminated on the RHEL 6.3.0 host, and QEMU displays 'virtio_ioport_write: unexpected address 0x13 value 0x0' in the qemu monitor.

Additional info: The transferred file is about 1 MB in size. The issue does not reproduce with a very small file.


Best Regards,
Yang Zhao
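The serial-host-receive.py script from step 3 is likewise not attached, but the host side of the channel is the chardev UNIX socket (/tmp/tty1) created by the -chardev socket,...,server,nowait option in the command lines above. A hypothetical receiver sketch, assuming a UNIX stream socket drained until EOF (the function name and buffer size are assumptions):

```python
import socket

def receive_all(path="/tmp/tty1", bufsize=65536):
    # Connect to the virtserialport chardev socket on the host and drain
    # whatever the guest writes, returning the total byte count.
    total = 0
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        while True:
            chunk = sock.recv(bufsize)
            if not chunk:  # EOF: the writer closed its end
                return total
            total += len(chunk)
```

Killing this receiver mid-transfer (the Ctrl+C in step 6) is what leaves the guest-side virtqueue full and triggers the hang described below.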
Comment 7 Vadim Rozenfeld 2012-08-14 00:51:11 EDT
 
could you please try reproducing this problem with the latest driver, available at http://download.devel.redhat.com/brewroot/work/tasks/5605/4755605/virtio-win-prewhql-0.1.zip ?

Thank you,
Vadim.
Comment 8 Yang Zhao 2012-08-16 03:18:04 EDT
(In reply to comment #7)
>  
> could you please try reproducing this problem with the latest driver,
> available at
> http://download.devel.redhat.com/brewroot/work/tasks/5605/4755605/virtio-win-
> prewhql-0.1.zip ?
> 
> Thank you,
> Vadim.

Hi Vadim:
   I can still reproduce it with the latest driver (virtio-win-prewhql-33).

environment:
src-host: qemu-kvm:qemu-kvm-0.12.1.2-2.302.el6.x86_64
          kernel:kernel-2.6.32-294.el6.x86_64
          seabios:seabios-0.6.1.2-19.el6.x86_64
dst-host: qemu-kvm-rhev:qemu-kvm-rhev-0.12.1.2-2.295.el6.x86_64
          kernel:kernel-2.6.32-272.el6.x86_64
          seabios:seabios-0.6.1.2-19.el6.x86_64

steps are based on comment 6.

Thanks,
Yang Zhao
Comment 9 Mike Cao 2012-08-16 03:20:09 EDT
According to comment #8, re-assigning this bug.
Comment 10 Vadim Rozenfeld 2012-09-10 03:31:12 EDT
Please try reproducing the problem with new drivers
http://download.devel.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/35/win/virtio-win-prewhql-0.1.zip

Thank you,
Vadim.
Comment 12 Mike Cao 2012-10-30 03:39:20 EDT
dengmin, please verify this bug.
Comment 13 Min Deng 2012-10-30 22:49:39 EDT
(In reply to comment #10)
> Please try reproducing the problem with new drivers
> http://download.devel.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/35/
> win/virtio-win-prewhql-0.1.zip
> 
> Thank you,
> Vadim.

Hi Vadim,

   QE can still reproduce the issue with build 41. Could you please double-check the issue?

Thanks 
Min
Comment 14 Ronen Hod 2012-12-18 10:49:08 EST
Postponed to RHEL6.5 since we are done with 6.4 WHQL.
It might be that it was solved in build 48.
Comment 15 Min Deng 2013-01-11 04:06:20 EST
(In reply to comment #14)
> Postponed to RHEL6.5 since we are done with 6.4 WHQL.
> It might be that it was solved in build 48.

Testing results for this bug with virtio-win-prewhql-0.1-49:
Build info,
src:qemu-img-rhev-0.12.1.2-2.295.el6_3.10.x86_64
    kernel-2.6.32-279.20.1.el6.x86_64
des:kernel-2.6.32-353.el6.x86_64
    qemu-kvm-rhev-0.12.1.2-2.351.el6.x86_64

Actual results: QE can still reproduce the issue, so please double-check it.
Comment 18 Gal Hammer 2013-03-11 05:51:35 EDT
The bug is not related to migration. I was able to reproduce it using:

> type 1.bat
:start
copy 1.log \\Global\.\org.linux-kvm.port.0
goto start

The 1.log file is a ~4M text file.

The guest hangs after the virtqueue becomes full. This causes a bug check to occur, and the netkvm driver writes to the read-only ISR register. The write causes QEMU to complain with a "virtio_ioport_write: unexpected address 0x13 value 0x0" error.
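For context on the error message: in the legacy virtio-pci I/O layout, offset 0x13 is the interrupt status (ISR) byte, which the guest is only supposed to read (a read clears it), so QEMU rejects and logs any guest write to it. A rough model of that check follows; this is not QEMU source, and only the offsets relevant here are listed:

```python
# Legacy virtio-pci I/O register offsets (sketch, not QEMU source).
VIRTIO_PCI_QUEUE_NOTIFY = 0x10  # guest writes a queue index here to kick the host
VIRTIO_PCI_STATUS = 0x12        # device status byte, guest-writable
VIRTIO_PCI_ISR = 0x13           # interrupt status, read-only for the guest

# Other guest-writable offsets (features, queue PFN, queue select) are
# omitted for brevity in this sketch.
GUEST_WRITABLE = {VIRTIO_PCI_QUEUE_NOTIFY, VIRTIO_PCI_STATUS}

def ioport_write_ok(addr):
    """Model of the check behind the 'unexpected address 0x13' message."""
    return addr in GUEST_WRITABLE
```

A write to 0x13 falls through this check, which is exactly the "unexpected address 0x13 value 0x0" line seen in the qemu monitor.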
Comment 19 Colin Coe 2013-03-31 09:18:28 EDT
This sounds similar to what we're seeing with our Windows guests in RHEV 3.1 with fat RHEL 6.4 hypervisors.

Is virtio-win v1.6.3 expected to resolve this?

Thanks

CC
Comment 20 Mike Cao 2013-03-31 11:28:03 EDT
(In reply to comment #19)
> This sounds similar to what we're seeing with our Windows guests in RHEV 3.1
> with fat RHEL 6.4 hypervisors.
> 
> Is virtio-win v1.6.3 expected to resolve this?
> 
> Thanks
> 
> CC

Hello Colin

Can you describe what you hit ? I assume you hit https://bugzilla.redhat.com/show_bug.cgi?id=876982

Mike
Comment 21 Colin Coe 2013-03-31 18:35:01 EDT
Hi Mike

I can't view that BZ as I'm not authorized.

Our Windows (XP,7, Serv 2008R2, Serv 2012) VMs randomly become non-responsive.

On advice from GSS we set up a VB job that runs every minute and appends a date stamp to a file.  We found that even when the VMs were non-responsive, this file was still written to.

When the VMs are non-responsive, they appear completely dead in the water.  They cannot be pinged or otherwise accessed.  The SPICE console shows a frozen GUI.

The only way to get them back is to 'power cycle' them.

This has been happening since around RHEV 3.0.5.  We didn't see this at all in RHEV 2.1 or RHEV 2.2, and I don't recall it occurring in early RHEV 3.0 releases.

CC
Comment 22 Colin Coe 2013-03-31 19:03:54 EDT
Re-reading the above, I wasn't all that clear.

Although the VMs appear completely dead, they are obviously not, as that file continued to be written to.  Also, running 'strace -p <PID>' on the qemu process ID of the frozen guest showed a lot of activity.

We've never seen this on our Linux (all RHEL) VMs.

Thanks

CC
Comment 23 Mike Cao 2013-03-31 20:24:54 EDT
(In reply to comment #21)
> Hi Mike
> 
> I can't view that BZ as I'm not authorized.
> 
> Our Windows (XP,7, Serv 2008R2, Serv 2012) VMs randomly become
> non-responsive.

Hi,Colin

I don't have permission to make the bug public; you can ask GSS to do that.
That bug is about guest lock-ups when running a Windows 7 64-bit guest on a RHEL 6.4 host, and it has been fixed in qemu-kvm-0.12.1.2-2.353.el6.

Could you check the following info:
1. qemu-kvm-rhev version on the hypervisor
2. SPICE version in the guest
3. spice-agent version in the guest
4. virtio-serial version you are using
5. Output of 'ps aux | grep qemu-kvm' on the hypervisor to show the command line
6. Can you reproduce when using VNC?
7. Can you reproduce when the spice-vdagent service is disabled?

Thanks,
Mike
> 
> On advise from GSS we setup a VB job that runs every minute and appends a
> date stamp to a file.  We found that even when the VMs we non-responsive,
> this file still got written to.
> 
> When the VMs are non-responsive, they appear completely dead in the water. 
> They cannot be pinged or other wise accessed.  The SPICE console shows a
> frozen GUI.
> 
> They only way to get them back is to 'power cycle' them.
> 
> This has been happening since around the RHEV 3.0.5.  We didn't see this at
> all in RHEV 2.1 or RHEV 2.2 and I don't recall this occurring in early RHEV
> 3.0 releases.
> 
> CC

Comment 24 Colin Coe 2013-04-01 01:26:55 EDT
Hi Mike

qemu-kvm-rhev-0.12.1.2-2.355.el6_4.2.x86_64

SPICE Agent v3.0.4

Not sure how to get the SPICE version...

virtio-serial is from virtio-win-1.6.3

I'll post the command line from the hypervisor when I can.

I'll look into points 5 & 6 

CC
Comment 25 Colin Coe 2013-04-01 02:37:00 EDT
qemu     15466     1 14 Mar30 ?        05:58:37 /usr/libexec/qemu-kvm -name bentrd1p -S -M rhel6.3.0 -cpu Westmere -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid da4468a3-f714-45bd-b895-95d8aa41061b -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.4.0.4.el6,serial=37333036-3831-4753-4831-323758533153_00:17:a4:77:14:0a,uuid=da4468a3-f714-45bd-b895-95d8aa41061b -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/bentrd1p.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-03-30T21:15:55,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/829c659f-7cd3-44e9-b641-6f1b0b0b225b/9ff0d28e-ec48-433a-ae1b-22252f9e8425/images/560ade7b-a4a6-4d3e-b49b-debeba2fd973/a560142e-96f1-46d1-a8d4-3b612864dbd7,if=none,id=drive-virtio-disk0,format=qcow2,serial=560ade7b-a4a6-4d3e-b49b-debeba2fd973,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=88,id=hostnet0,vhost=on,vhostfd=91 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:6a:0a,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/bentrd1p.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/bentrd1p.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5964,tls-port=5965,addr=172.22.106.134,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.vram_size=67108864
Comment 26 Mike Cao 2013-04-01 04:24:23 EDT
Hi,Colin

Can you provide virtio-serial version by following way ?
In the guest, run devmgmt.msc -> right-click "VirtIO-Serial Driver" -> Properties -> Driver tab -> Driver Version

Thanks,
Mike
Comment 27 Colin Coe 2013-04-01 04:59:13 EDT
52.63.103.3000
Comment 28 Mike Cao 2013-04-01 05:00:40 EDT
(In reply to comment #27)
> 52.63.103.3000

You are not using the latest virtio-win drivers; please reinstall from virtio-win-1.6.3 (the version should be XX.XX.XX.4900).

Mike
Comment 29 Colin Coe 2013-04-01 05:23:48 EDT
Now at 52.64.104.4900.

Will monitor and report back.
Comment 30 Colin Coe 2013-04-01 22:31:43 EDT
One of our Windows VMs has just hung again.

qemu      7024     1 55 08:14 ?        01:06:41 /usr/libexec/qemu-kvm -name benfep2p -S -M rhel6.3.0 -cpu Westmere -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid fb1a3e35-ce54-4888-90e5-a60a79ed4670 -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.4.0.4.el6,serial=37333036-3831-4753-4831-323758533153_00:17:a4:77:14:0a,uuid=fb1a3e35-ce54-4888-90e5-a60a79ed4670 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/benfep2p.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-04-01T23:15:01,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/rhev/data-center/829c659f-7cd3-44e9-b641-6f1b0b0b225b/02ea47c8-2d0d-4bc5-8004-b8cd261c1dab/images/11111111-1111-1111-1111-111111111111/virtio-win.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/829c659f-7cd3-44e9-b641-6f1b0b0b225b/9ff0d28e-ec48-433a-ae1b-22252f9e8425/images/155045b3-34b3-4b6c-899b-9bd392db6c96/3673611e-177b-474e-86fe-fbc91e56e9de,if=none,id=drive-virtio-disk0,format=qcow2,serial=155045b3-34b3-4b6c-899b-9bd392db6c96,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=42,id=hostnet0,vhost=on,vhostfd=43 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:6a:09,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/benfep2p.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/benfep2p.org.qemu.guest_agent.0,server,nowait -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5908,tls-port=5909,addr=172.22.106.135,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49156

I did a 'strace -o /tmp/benfep2p.strace -p 7024' for a second or so and got 10000 or so lines of output. File attached.

RHEV-M is reporting the following under the VMs Applications tab:
RHEV-Agent64   - 3.0.3
RHEV-Balloon64 - (no version reported)
RHEV-Block64   - 3.1.3
RHEV-Network64 - 3.1.4
RHEV-Serial64  - 3.1.5
RHEV-Tools     - 3.1.9

I wasn't able to connect to the SPICE console.

I've confirmed that this guest has driver version 52.64.104.4900 for virtio-serial, dated 29/11/2012.

Thanks

CC
Comment 31 Colin Coe 2013-04-01 22:33:16 EDT
Created attachment 730535 [details]
strace output of qemu process of hung Windows guest
Comment 32 Colin Coe 2013-04-03 01:07:58 EDT
We're happy to do beta testing if required, or provide additional info.

Also, would disabling the virtio-serial device in Device Manager prevent the system hangs?  This is a huge problem for us, as it is making our entire RHEV-based virtualisation environment look unstable.

CC
Comment 33 Mike Cao 2013-04-03 02:36:36 EDT
(In reply to comment #32)
> We're happy to do beta testing if required, or provide additional info.
> 
> Also, would disabling the virtio-serial device in Device Manager prevent the
> system hangs?  This is a huge problem for us as it is making our entire RHEV
> based virtualisation environment look unstable.
> 
> CC

Hello Colin

Thanks for your cooperation!
I am installing a RHEV-M setup to reproduce it right now. Can you disable the spice-vdagent and RHEV-Agent services in the guest (Run -> services.msc -> Spice-Vdagent Service -> Disable) to check whether it hits again?

Best Regards,
Mike
Comment 34 Tony Li 2013-04-03 05:11:02 EDT
Mike,

We are trying a patch based on the following:
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=867366

Best,
Tony
Comment 35 Tony Li 2013-04-03 05:14:31 EDT
--- a/hw/virtio-serial-bus.c	2013-03-23 16:40:58.000000000 +0800
+++ b/hw/virtio-serial-bus.c	2013-03-23 20:12:12.000000000 +0800
@@ -21,6 +21,12 @@
 #include "sysbus.h"
 #include "virtio-serial.h"
 
+static inline QEMUTimer *qemu_new_timer_ns(QEMUClock *clock, QEMUTimerCB *cb,
+                                           void *opaque)
+{
+    return qemu_new_timer(clock, cb, opaque);
+}
+
 /* The virtio-serial bus on top of which the ports will ride as devices */
 struct VirtIOSerialBus {
     BusState qbus;
@@ -51,6 +57,15 @@
     struct virtio_console_config config;
 
     bool flow_control;
+
+    struct {
+        QEMUTimer *timer;
+        int nr_active_ports;
+        struct {
+            VirtIOSerialPort *port;
+            uint8_t host_connected;
+        } *connected;
+    } post_load;
 };
 
 static VirtIOSerialPort *find_port_by_id(VirtIOSerial *vser, uint32_t id)
@@ -670,6 +685,35 @@
     }
 }
 
+static void virtio_serial_post_load_timer_cb(void *opaque)
+{
+    int i;
+    VirtIOSerial *s = opaque;
+    VirtIOSerialPort *port;
+    uint8_t host_connected;
+    VirtIOSerialPortInfo *info;
+
+    for (i = 0 ; i < s->post_load.nr_active_ports; ++i) {
+        port = s->post_load.connected[i].port;
+        host_connected = s->post_load.connected[i].host_connected;
+        if (host_connected != port->host_connected) {
+            /*
+             * We have to let the guest know of the host connection
+             * status change
+             */
+            send_control_event(port, VIRTIO_CONSOLE_PORT_OPEN,
+                               port->host_connected);
+        }
+        info = DO_UPCAST(VirtIOSerialPortInfo, qdev, port->dev.info);
+        if (port->guest_connected && info->guest_open) {
+            /* replay guest open */
+            info->guest_open(port);
+        }
+    }
+    g_free(s->post_load.connected);
+    s->post_load.connected = NULL;
+}
+
 static int virtio_serial_load(QEMUFile *f, void *opaque, int version_id)
 {
     VirtIOSerial *s = opaque;
@@ -722,10 +766,13 @@
 
     qemu_get_be32s(f, &nr_active_ports);
 
+    s->post_load.nr_active_ports = nr_active_ports;
+    s->post_load.connected =
+        g_malloc0(sizeof(*s->post_load.connected) * nr_active_ports);
+
     /* Items in struct VirtIOSerialPort */
     for (i = 0; i < nr_active_ports; i++) {
         uint32_t id;
-        bool host_connected;
         VirtIOSerialPortInfo *info;
 
         id = qemu_get_be32(f);
@@ -740,15 +787,9 @@
             /* replay guest open */
             info->guest_open(port);
         }
-        host_connected = qemu_get_byte(f);
-        if (host_connected != port->host_connected) {
-            /*
-             * We have to let the guest know of the host connection
-             * status change
-             */
-            send_control_event(port, VIRTIO_CONSOLE_PORT_OPEN,
-                               port->host_connected);
-        }
+
+        s->post_load.connected[i].port = port;
+        s->post_load.connected[i].host_connected = qemu_get_byte(f);
 
         if (version_id > 2) {
             uint32_t elem_popped;
@@ -773,6 +814,7 @@
             }
         }
     }
+    qemu_mod_timer(s->post_load.timer, 1);
     return 0;
 }
 
@@ -1029,6 +1071,9 @@
     register_savevm(dev, "virtio-console", -1, savevm_ver, virtio_serial_save,
                     virtio_serial_load, vser);
 
+    vser->post_load.timer = qemu_new_timer_ns(vm_clock,
+            virtio_serial_post_load_timer_cb, vser);
+
     return vdev;
 }
 
@@ -1042,5 +1087,8 @@
     qemu_free(vser->ovqs);
     qemu_free(vser->ports_map);
 
+    g_free(vser->post_load.connected);
+    qemu_free_timer(vser->post_load.timer);
+
     virtio_cleanup(vdev);
 }
Comment 39 Vadim Rozenfeld 2013-04-04 04:48:47 EDT
From dump file analysis, it looks like the user-mode code tries to write to the virtio-serial port after the transmit virtqueue associated with the port has already gone away. IIRC we have met this problem before.
Comment 40 Colin Coe 2013-04-04 18:12:19 EDT
In reply to comment 33 (https://bugzilla.redhat.com/show_bug.cgi?id=823818#c33)

I don't see SPICE-VDAgent in the services list, only "RHEV Agent" and "RHEV Spice Agent".

Thanks

CC
Comment 41 Mike Cao 2013-04-05 10:33:47 EDT
(In reply to comment #40)
> In reply to comment 33
> (https://bugzilla.redhat.com/show_bug.cgi?id=823818#c33)
> 
> I don't see SPCIE-VDAgent in the services list, only "RHEV Agent" and "RHEV
> Spice Agent".
> 
> Thanks
> 
> CC

RHEV Spice Agent
Comment 42 Mike Cao 2013-04-07 03:13:28 EDT
Reproduced this issue on virtio-win-1.6.3.
Steps same as comment #18.
Granting qa_ack.
Comment 43 Colin Coe 2013-04-07 04:50:31 EDT
How can I reproduce the bug? Following comment 18, \\global is interpreted as a UNC path.
Comment 44 Mike Cao 2013-04-07 04:57:26 EDT
(In reply to comment #43)
> How can I reproduce the bug? Following comment 18, \\global is interpreted
> as a UNC path.

1.Stop RHEV-Agent Service in the guest 
2.:start
copy 1.log \\.\Global\.\org.linux-kvm.port.0
goto start
Comment 45 Colin Coe 2013-04-07 18:55:19 EDT
Just trying this now...

---
C:\>dir 1.log
 Volume in drive C has no label.
 Volume Serial Number is 70F5-1531

 Directory of C:\

12/06/2012  03:31 PM             1,697 1.log
               1 File(s)          1,697 bytes
               0 Dir(s)  21,304,307,712 bytes free

C:\>copy 1.log \\.\Global\.\org.linux-kvm.port.0
The system cannot find the file specified.
        0 file(s) copied.

C:\>dir \\.\Global\.\org.linux-kvm.port.*
The filename, directory name, or volume label syntax is incorrect.

C:\>
---

Any ideas what I'm doing wrong?
Comment 46 Colin Coe 2013-04-07 19:20:57 EDT
Created attachment 732468 [details]
Image of device manager
Comment 47 Colin Coe 2013-04-07 19:23:27 EDT
Additionally, on the virtio-win-1.6.3 ISO, I found a vioser-test.exe.  Running this I see:
---
D:\Drivers\vioserial\xp\x86>vioser-test.exe
Running in non-blocking mode.
Cannot find vioserial device. \\?\{6fde7547-1b65-48ae-b628-80be62016026}#vioserialport#4&263e99fc&0&01#{6fde7521-1b65-48ae-b628-80be62016026} , error = 5

D:\Drivers\vioserial\xp\x86>
---

Please see attached image file (guest-devmgmt) which shows the Device Manager in the guest with the option 'Show Hidden Devices' selected.

Under 'System Devices', 'VirtIO-Serial Driver' is visible and being reported as 'working properly'.

Thanks
Comment 48 Vadim Rozenfeld 2013-04-07 20:37:01 EDT
(In reply to comment #47)
> Additionally, on the virtio-win-1.6.3 ISO, I found a vioser-test.exe. 
> Running this I see:
> ---
> D:\Drivers\vioserial\xp\x86>vioser-test.exe
> Running in non-blocking mode.
> Cannot find vioserial device.
> \\?\{6fde7547-1b65-48ae-b628-
> 80be62016026}#vioserialport#4&263e99fc&0&01#{6fde7521-1b65-48ae-b628-
> 80be62016026} , error = 5
> 

Error 5 means access denied. Are you running with admin privileges?

> D:\Drivers\vioserial\xp\x86>
> ---
> 
> Please see attached image file (guest-devmgmt) which shows the Device
> Manager in the guest with the option 'Show Hidden Devices' selected.
> 
> Under 'System Devices', 'VirtIO-Serial Driver' is visible and being reported
> as 'working properly'.
> 
> Thanks
Comment 49 Vadim Rozenfeld 2013-04-07 20:41:11 EDT
(In reply to comment #45)
> Just trying this now...
> 
> ---
> C:\>dir 1.log
>  Volume in drive C has no label.
>  Volume Serial Number is 70F5-1531
> 
>  Directory of C:\
> 
> 12/06/2012  03:31 PM             1,697 1.log
>                1 File(s)          1,697 bytes
>                0 Dir(s)  21,304,307,712 bytes free
> 
> C:\>copy 1.log \\.\Global\.\org.linux-kvm.port.0
> The system cannot find the file specified.
>         0 file(s) copied.
> 
> C:\>dir \\.\Global\.\org.linux-kvm.port.*
> The filename, directory name, or volume label syntax is incorrect.
> 
Virtio-serial is not a file system driver. dir will not work. 

> C:\>
> ---
> 
> Any ideas what I'm doing wrong?
Comment 50 Colin Coe 2013-04-07 20:57:51 EDT
Hi

Definitely logged in administrator (tried domain admin and local admin).  Please see attached image.

Thanks
Comment 51 Colin Coe 2013-04-07 21:00:17 EDT
Created attachment 732501 [details]
image showing username and reported errors
Comment 52 Vadim Rozenfeld 2013-04-07 21:16:56 EDT
(In reply to comment #51)
> Created attachment 732501 [details]
> image showing username and reported errors

yes, I see. Thanks.
Actually, I was referring to comment 48, with a test app.
vioser-test is a very simple app: it just tries to find a virtio-serial
device in the system, and then picks the first port it can find.
If that port was already opened by another app, vioser-test will fail
to open it, because a port can only be opened exclusively.

Btw, you can try WinObj (http://technet.microsoft.com/en-us/sysinternals/bb896657.aspx) to see whether the port exists and the number
of open handles/references to it.
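Vadim's exclusive-open point can be illustrated with a rough POSIX analogy (illustration only — the vioserial driver enforces this inside its Windows create handler, and the function name here is made up):

```python
import fcntl
import os

# Rough POSIX analogy (NOT the driver's code): a vioserial port behaves like
# a resource that only one opener may hold at a time, so a second open fails;
# compare the "error = 5" (access denied) reported by vioser-test.
def open_exclusive(path):
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        # A non-blocking exclusive lock stands in for the driver's
        # exclusive-open check.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        raise PermissionError("port already opened by another app")
    return fd
```

Here a second `open_exclusive()` on the same path fails until the first holder closes its descriptor, mirroring why vioser-test cannot open a port that the RHEV agent already holds.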
Comment 53 Mike Cao 2013-04-07 21:45:15 EDT
(In reply to comment #45)
> Just trying this now...
> 
> ---
> C:\>dir 1.log
>  Volume in drive C has no label.
>  Volume Serial Number is 70F5-1531
> 
>  Directory of C:\
> 
> 12/06/2012  03:31 PM             1,697 1.log
>                1 File(s)          1,697 bytes
>                0 Dir(s)  21,304,307,712 bytes free
> 
> C:\>copy 1.log \\.\Global\.\org.linux-kvm.port.0
> The system cannot find the file specified.
>         0 file(s) copied.
> 
> C:\>dir \\.\Global\.\org.linux-kvm.port.*
> The filename, directory name, or volume label syntax is incorrect.
> 
> C:\>
> ---
> 
> Any ideas what I'm doing wrong?


Could you show me your commandline ?

Sorry, the correction should be:

1. Stop the RHEV-Agent service in the guest.
2. Run:
:start
copy 1.log \\.\Global\.\com.redhat.rhevm.vdsm
goto start
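For illustration, the copy loop above can also be written as a small Python sketch (the port path is the one quoted in this thread; it exists only inside a Windows guest with the vioserial driver bound, so it is an assumption here — any writable path behaves the same for the loop itself):

```python
import shutil

# On a Windows guest, port_path would be the device path from the thread,
# e.g. r"\\.\Global\.\com.redhat.rhevm.vdsm" (assumption: requires the
# vioserial driver bound and no other holder of the port).
def hammer_port(src, port_path, iterations):
    """Copy src to port_path `iterations` times; return copies performed."""
    done = 0
    for _ in range(iterations):
        shutil.copyfile(src, port_path)
        done += 1
    return done
```

Running this against the port while nothing on the host drains the channel is what exercises the "receiving side terminated" scenario from the bug summary.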
Comment 54 Mike Cao 2013-04-07 21:48:53 EDT
(In reply to comment #52)
> (In reply to comment #51)
> > Created attachment 732501 [details]
> > image showing username and reported errors
> 
> yes, i see. thanks
> Actually, I was referring to comment 48, with a test app.
> vioser-test is a very simple app: it just tries to find a virtio-serial
> device in the system, and then picks the first port it can find.
> If that port was already opened by another app, vioser-test will fail
> to open it, because a port can only be opened exclusively.
> 
> Btw, you can try WinObj
> (http://technet.microsoft.com/en-us/sysinternals/bb896657.aspx) to see if
> the port exist and the number 
> of open handles/references to this port.

Vadim

I think vioser-test can find some bugs too; at least it cannot make the guest BSOD or hang (as we know, it uses virtio-serial channels). I will go through the vioser-test bugs and reopen the urgent ones.

Best Regards,
Mike
Comment 55 Vadim Rozenfeld 2013-04-07 22:07:38 EDT
(In reply to comment #54)
> (In reply to comment #52)
> > (In reply to comment #51)
> > > Created attachment 732501 [details]
> > > image showing username and reported errors
> > 
> > yes, i see. thanks
> > Actually, I was referring to comment 48, with a test app.
> > vioser-test is a very simple app: it just tries to find a virtio-serial
> > device in the system, and then picks the first port it can find.
> > If that port was already opened by another app, vioser-test will fail
> > to open it, because a port can only be opened exclusively.
> > 
> > Btw, you can try WinObj
> > (http://technet.microsoft.com/en-us/sysinternals/bb896657.aspx) to see if
> > the port exist and the number 
> > of open handles/references to this port.
> 
> Vadim
> 
> I think vioser-test can find some bugs too; at least it cannot make the
> guest BSOD or hang (as we know, it uses virtio-serial channels). I will go
> through the vioser-test bugs and reopen the urgent ones.
> 
Sure.
We need to spend some time and build a useful test utility from the vioser-test app.
Best regards,
Vadim.
> Best Regards,
> Mike
Comment 58 Mike Cao 2013-04-18 05:56:34 EDT
Reproduced this issue with virtio-win-prewhql-49 on the WinXP platform.
Verified this issue with virtio-win-prewhql-59.

Steps are the same as in comment #18.

Actual Results:
on virtio-win-prewhql-49, the guest hangs
on virtio-win-prewhql-64, the guest works fine

Additional info :
2.6.32-362.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.358.el6_4.bz949135.v2.x86_64
package qemu-kvm is not installed
seabios-0.6.1.2-26.el6.x86_64

Based on the comments above, this issue has been fixed already; moving status to verified.
Comment 59 Alon Levy 2013-04-22 06:20:36 EDT
*** Bug 915846 has been marked as a duplicate of this bug. ***
Comment 60 Colin Coe 2013-05-01 00:52:30 EDT
Is QA reviewing this?  Any ideas when RHEV will get this update?

Thanks
Comment 64 Ademar Reis 2013-05-15 14:31:34 EDT
*** Bug 949135 has been marked as a duplicate of this bug. ***
Comment 68 Mike Cao 2013-05-20 03:55:11 EDT
Although this bug has been fixed, the patch introduced two regressions:
https://bugzilla.redhat.com/show_bug.cgi?id=956936 and https://bugzilla.redhat.com/show_bug.cgi?id=953812
Comment 69 Colin Coe 2013-06-04 19:11:49 EDT
How's this progressing?
Comment 70 Mike Cao 2013-06-04 20:44:32 EDT
(In reply to Colin Coe from comment #69)
> How's this progressing?

A build with all the regression bugs fixed came out last night.
QE will work on it to determine whether it is a passing build to push out.
Comment 73 Mike Cao 2013-06-17 02:56:22 EDT
(In reply to Mike Cao from comment #70)
> (In reply to Colin Coe from comment #69)
> > How's this progressing?
> 
> A Build w/ all the regression bug fixed just come out yesterday night .
> QE will working on it whether it is a Pass build to push out

The WHQL test of that build failed; the developers will work on it.
Comment 74 Colin Coe 2013-06-18 19:41:29 EDT
Can we not revert to an older version of the driver that does not cause the Windows VM to hang? That way RHEV users can have stable Windows VMs and the devs can work on the problem in the background without being hounded.

This has been going on for a long time and is our "Number 1" problem affecting the stability of RHEV.

Thanks

CC
Comment 77 Ronen Hod 2013-06-20 09:25:04 EDT
Roman,

I am not sure about which regressions you refer to.
Anyhow, after fixing several bugs, and some issues with the new WHQL (HCK), we finally have a good version. It passed functional tests and WHQL, and it should be back from MSFT soon (maybe even today).
This is our official version, and it has no known regressions.

Regards, Ronen.
Comment 78 Mike Cao 2013-06-25 03:40:09 EDT
The driver has been digitally signed by Microsoft; we will ship it out via RHN soon.

Mike
Comment 79 Mike Cao 2013-06-27 22:21:47 EDT
*** Bug 968545 has been marked as a duplicate of this bug. ***
Comment 81 IBM Bug Proxy 2013-06-27 22:25:18 EDT
Created attachment 766387 [details]
loader.exe error when try to reproduce this issue
Comment 82 IBM Bug Proxy 2013-06-27 22:25:39 EDT
Created attachment 766388 [details]
Add HyperV relaxed feature hook for VDSM




This is the hook for VDSM to add hv_relaxed to the domain XML during VM creation.

Place the attached file into /usr/libexec/vdsm/hooks/before_vm_start

and all newly started VMs will have the flag added
Comment 85 IBM Bug Proxy 2013-06-28 07:15:03 EDT
With respect to the latest comments from IBM Bug Proxy, I'd like to make you aware that we have already found the issue mentioned in the comments
(namely the possibility of a spice-server thread executing an infinite loop).

Please bear in mind that the issue is being solved separately in bug https://bugzilla.redhat.com/show_bug.cgi?id=964136
Comment 88 IBM Bug Proxy 2013-07-02 14:45:25 EDT
Description of problem: When many Windows 7 VMs (70 or more) are running on a single hypervisor, CPU usage rises to 100% and libvirtd appears to be using 100% of the CPU. In addition, attempting to connect to already-running VMs shows the last thing that was on the screen, but you can't ping or interact with the VM. New VMs started after this run normally.

RHEV 3.1, RHEL 6.4 Hypervisor
vdsm-4.10.2-1.9.el6ev.x86_64
libvirt-0.10.2-18.el6_4.4.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64

How reproducible: Very

Steps to Reproduce:
1. Create a pool of Win7 x86_64 VMs with the test load script
2. Start 70 or more VMs from the pool
3. Keep starting VMs until CPU usage maxes out on the hypervisor

Actual results:
CPU usage increases, VMs won't respond over Spice. Libvirtd uses 100% CPU

Expected results:
Either reduced VM performance or a warning of overload

There's one suspect thing in libvirtd's strace: thread 6635 is frequently calling gettid(), which could mean the thread is in an infinite loop or something similar. However, the strace was taken after the libvirtd.log files were gathered in the sosreport (thread 6635 cannot be found in any of the provided libvirtd logs), which makes it impossible to check what that thread was doing. And since we don't know when the bug happened before the sosreport was taken, we have no indication of where to look within the 45GB of libvirtd logs. Anyway, except for the frequently called gettid(), there does not seem to be anything unusual in either the strace or the log files; vdsm is just asking for block device statistics every few seconds, libvirtd is passing those requests to the running qemu-kvm processes, and they are responding to those requests.

I need to see the sosreport (less the probably not useful verbose libvirtd log). It might also be helpful if you exported the vm definitions from RHEV-M. Please put them in a password-protected zip file and send me a link to get it from a people.redhat.com ftp account. -Scott

To be clearer, I meant: Please put the sosreport and, if possible, the xml in a zip file....

Updated Summary of the Issue:
1. 2 rhel-based RHEV hosts running the latest 6.4 and latest vdsm/libvirt/qemu
2. Running a load test whereby 5 VMs at a time are booted up, sysprepped, and set to run a load script attached privately to this bugzilla
3. Once a certain number of VMs are booted up (random number), while the newest 5 are booting/sysprepping, **ALL** of the currently running VMs go into what can only be described as a hung state. You can connect to their Spice console but you cannot move the mouse or interact with it in any way and you can't ping them either.
4. Oddly enough, after the "problem" happens and all the VMs appear to hang, new VMs can be started without issue. They work just fine.
5. Even the qemu-kvm processes for the VMs that are "hung" still seem to properly respond to queries from libvirt.

Comparison system:
In their other environment, called DEV, which is running HP hardware and the same versions of RHEL/RHEV. They ran this same load test and got up to 100 VMs running without issue. They arbitrarily stopped at 100 VMs because the issue has been seen in the other environment, on more powerful hardware, at 50 and 70 VMs.

Question to SEG/engineering:
--> What targeted information needs to be collected for the upcoming test on Monday?

We will be watching the system live until they start up enough VMs for the problem to happen. At that time, we will note the exact timestamp to look at in the logs and gather any additional debugging info we need.

Given that libvirt seems to be working fine (except for consuming CPU) and processing API calls and even QEMU is responding to QMP commands from libvirt, I'm moving this bug to qemu-kvm to further investigate why guests seem to be hung. If libvirt team can do anything to help with this investigation, don't hesitate to contact us.

I think I see what's happening here.  We are seeing the following error message being reported in the libvirt logs:

virtio_ioport_write unexpected address 0x13 value 0

This message continuously prints to the libvirt logs.  Digging in a little further, I found:

https://bugs.launchpad.net/qemu/+bug/990364

Which contains:

"It doesn't look like as a vritio-win driver problem.
you get the following message
"virtio_ioport_write: unexpected address 0x13 value 0x1"
because netkvm driver triggers BSOD event, which happened in
different stack, and then kills the hosting QEMU prccess
by writing to ISR register."

Now writing to the ISR register is *not* a supported way of exiting QEMU.  I don't know why netkvm is making this assumption.  I cannot find any instance in upstream QEMU where we would exit on an ISR write, so I suspect this was a RHEL-specific patch that was dropped in 6.4 for some reason.

Anyway, instead of exiting the guest, the guest is continuously writing to the ISR and making no progress.  All guests consume 100% CPU, which in turn causes libvirt/VDSM to consume 100% CPU as they try to get work done.

To put Anthony's comment in context: a number (maybe all) of the Windows guests were trying to stop on a kernel panic ("BSOD"). Rather than let them evaporate, the virtio driver was trying to use an unsupported behavior of QEMU such that the qemu process would trap if the driver did the illegal write that Anthony mentions. So, there are two issues: the one that should be tracked in this bug is the defect in the virtio-win package that uses this unsupported QEMU behavior. Red Hat will need to pursue the underlying problem in a separate discussion. My speculation is that this may be the well-known Windows timer problem, so it may be necessary to find a way to let RHEV spin up guests with the relaxed timer feature enabled.  -Scott Garfinkle
The virtio-win team will look at it.
Unfortunately, the netKVM people are on PTO for a few days, so be patient with us.
Since we fixed many bugs in the netKVM drivers, I would like to give you more recent drivers for testing. More later.

(In reply to Ronen Hod from comment #24)
> The virtio-win team will look at it.
> Unfortunately, the netKVM people are on PTO for a few days, so be patient
> with us.
> Since we fixed many bugs in the netKVM drivers, I would like to give you
> more recent drivers for testing. More later.

May I know the virtio-win vioser driver version?

Could it be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=823818 ?

It should be fixed in build 59
https://brewweb.devel.redhat.com/buildinfo?buildID=267282

(In reply to Vadim Rozenfeld from comment #26)
> It should be fixed in build 59
> https://brewweb.devel.redhat.com/buildinfo?buildID=267282

https://bugzilla.redhat.com/show_bug.cgi?id=921200 , right?

(In reply to Mike Cao from comment #27)
> (In reply to Vadim Rozenfeld from comment #26)
> > It should be fixed in build 59
> > https://brewweb.devel.redhat.com/buildinfo?buildID=267282
> https://bugzilla.redhat.com/show_bug.cgi?id=921200 ,right ?

yes, it is.

(In reply to Anthony Liguori from comment #21)
> I think I see what's happening here.  We are seeing the following error
> message being reported in the libvirt logs:
> virtio_ioport_write unexpected address 0x13 value 0
> This message continuously prints to the libvirt logs.  Digging in a little
> further, I found:
> https://bugs.launchpad.net/qemu/+bug/990364
> Which contains:
>   "It doesn't look like as a vritio-win driver problem.
>    you get the following message
>    "virtio_ioport_write: unexpected address 0x13 value 0x1"
>    because netkvm driver triggers BSOD event, which happened in
>    different stack, and then kills the hosting QEMU prccess
>    by writing to ISR register."
> Now writing to the ISR register is *not* a supported way of exiting QEMU.  I
> don't know why netkvm is making this assumption.  I cannot find any instance
> in upstream QEMU where we would exit on an ISR write so I suspect this was a
> RHEL specific patch that was dropped in 6.4 for some reason.

Indeed, it was debug code for this BSOD. Apparently not a good choice.
We already removed it a while ago.
We have a well tested build without this issue, and we would like you to try it out. The BSOD itself is another issue, which might be fixed too by now.

> Anyway, instead of exiting the guest, the guest is continously writing to
> the ISR and making no progress.  All guests consume 100% which in turn
> causes libvirt/VDSM to consume 100% as they try to get work done.

Sorry, we did fix it, but the stable build that I was thinking of (60) was taken from a branch that missed this bug fix.

(In reply to Ronen Hod from comment #30)
> Sorry, we did fix it, but the stable build that I was thinking of (60) was
> taken from a branch that missed this bug fix.

Thanks Ronen. But to reconfirm, we don't have a stable build yet with this fix included to provide the Customer for testing. Is that correct?

fwiw, the urgent problem that we are trying to get to with the customer is the cause of the underlying BSOD. I don't see this bug here as the main show.

(In reply to Ronen Hod from comment #29)
> Indeed, it was a debug code for this BSOD. Apparently not a good one.
> We already removed it a while ago.
> We have a well tested build without this issue, and we would like you to try
> it out. The BSOD itself is another issue, which might be fixed too by now.

So what you're saying is in the next build, both of these issues are expected to be resolved?

1) Is there another BZ we can look at that will explain the expected fix for the BSOD issue?
2) Is there an ETA on when we can expect a stable build to provide to the customer?

(In reply to seg@us.ibm.com from comment #32)
> fwiw, the urgent problem that we are trying to get to with the customer is
> the cause of the underlying BSOD. I don't see this bug here as the main show.

Could you upload the crash dump file?

Personally, I am not sure whether the guest created a crash dump. Anyway, I assume your request is addressed to your TAM, who is the owner of this problem, and is working with the customer.  Christy and Anthony and I just stepped in briefly to lend a hand.

BTW, that said, it would not surprise me if there were no crash dumps. If one or more exist (and presumably you only need one), then Bryan or Anitha or someone can get it. If not, you might have to provide a special build of the virtio driver without the bug noted above (and note that this is NOT a production environment). Anyway, I do again think it is worthwhile to pursue debugging the underlying problem in a separate bug, just to keep the problems separate.

(In reply to seg@us.ibm.com from comment #37)
> BTW, that said, it would not surprise me if there were no crash dumps. If
> one or more exist (and presumably you only need one), then Bryan or Anitha
> or someone can get it. If not, you might have to provide a special build of
> the virtio driver without the bug noted above (and note that this is NOT a
> production environment). Anyway, I do again think it is worthwhile to pursue
> debugging the underlying problem in a separate bug, just to keep the
> problems separate.

From the bug report, I see the guest gets stuck rather than hanging, doesn't it?
Could you provide the virtio-serial and netkvm driver versions in the guest?

(In reply to Mike Cao from comment #38)
> From the bug report, I see the guest gets stuck rather than hanging, doesn't it?
> Could you provide the virtio-serial and netkvm driver versions in the guest?

Mike, the virtio-serial version is 52.63.103.3000 (7/3/2012) and the netkvm version is 60.64.104.4900 from the RHEV Tools ISO 3.1-12

(In reply to Bryan Yount from comment #39)
> (In reply to Mike Cao from comment #38)
> > From the bug report, I see the guest gets stuck rather than hanging, doesn't it?
> > Could you provide the virtio-serial and netkvm driver versions in the guest?
> Mike, the virtio-serial version is 52.63.103.3000 (7/3/2012) and the netkvm
> version is 60.64.104.4900 from the RHEV Tools ISO 3.1-12

Hi,Bryan

I want to confirm whether the guest hangs or a BSOD occurs.
Could you generate a DMP file via NMI if the guest hangs, so the developers can check what caused the hang?

For now, there is no need to open another bug; we are giving it high attention as is.
I am not sure where we stand with the BSOD. Our QE will try to verify.
At first sight it looks like a high-load situation, where many guests did not get enough CPU resources, so Windows watchdogs expired, but I wouldn't like to speculate. The hv_relaxed flag should solve this specific problem.
Our engineers are looking into it, but the engineers who usually work on netKVM are on a short vacation, so please be more patient than usual.

Thanks, Ronen.

I agree with you that it seems like the relaxed timer is a likely suspect. Actually, I've been saying that for about two weeks, now.  Since there is no way to edit the guests' XML to enable this, I had suggested to Bryan and Anitha last week that you might want to implement the appropriate VDSM hook to test the theory.  We are happy to assist there, by the way, but we have no experience in implementing such hooks.

For now, QE cannot reproduce this issue. Here are my steps:
1. Boot several (about 5; I will add more if necessary) Win7-64 guests on a RHEL 6.4 host with virtio-win-1.6.3:
/usr/libexec/qemu-kvm   \
-drive file=win7-64-vm1.qcow2,if=none,cache=unsafe,media=disk,format=qcow2,id=drive-ide0-0-1 \
-device ide-drive,id=ide1,drive=drive-ide0-0-1,bus=ide.1,unit=1,bootindex=1 \
-monitor stdio \
-vnc :1 -vga cirrus \
-usb \
-name win7-64-vm1 \
-device usb-tablet,id=tablet1 \
-boot menu=on \
-chardev file,path=/root/console.log,id=serial1 \
-device isa-serial,chardev=serial1,id=s1 \
-cpu Penryn,+sep  -M pc \
-smp 2,cores=1,threads=1,sockets=2 -m 2G \
-enable-kvm \
-device virtio-serial-pci,id=virtio-serial0,max_ports=16 -chardev socket,path=/tmp/tt0,server,nowait,id=channel0 -device virtserialport,chardev=channel0,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port0 \
-netdev tap,sndbuf=0,id=hostnet1,vhost=on,script=/etc/qemu-ifup,downscript=no -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:52:12:16:54:48,bus=pci.0 \
-cdrom /usr/share/virtio-win/virtio-win-1.6.3.iso \
-fdb /usr/share/virtio-win/virtio-win-1.6.3_amd64.vfd

2. In the guests, try to run the attached Windows VM load test tool, workload_local_inst.exe

But the actual result is that when I run workload_local_inst.exe in the guest, it starts to install and then restarts (during the guest reboot, CPU usage rises to 100%).
After the restart, the guest prompts an error (I will attach the screenshot):
"line 3644 (File "c:\vdi_density\Loader.exe"): Error: Variable must be of type "Object"".
It seems that Loader.exe cannot run correctly.
Allan Voss, could you provide anything I should pay attention to when using the load test tool, to make the reproduction go more smoothly? Thanks!

lijin,
Although the customer is not running into memory overcommit, I would recommend that you try to make the host swap. This will introduce large delays and allow us to check the hv_relaxed theory.
Amit Shah gave QE a "make_system_swap" utility about half a year ago. You can try to use it.

BTW, a mention that hv_relaxed helps can be found in https://bugzilla.redhat.com/show_bug.cgi?id=801196#c88

Bryan, I leave the VDSM hook request from comment 42 to you and the RHEV guys.

Lijin,
Amit's code can be found at https://gitorious.org/make-system-swap

Bryan,

The thing that helps the most for BSOD analysis is a dump file. Alternatively, if dump files are not generated (probably not the case), try to connect debugger to guest.
Please try to get one.

(In reply to Ronen Hod from comment #45)
> lijin,
> Although the customer is not running into memory overcommit. I would
> recommend that you try to make the host swap. This will introduce large
> delays and allow us to check the hv_relaxed theory.
> Amit Shah gave QE a "make_system_swap" utility some half a year ago. You can
> try to use it.

How sure are we that this will give us the data we need? Is this something we're trying in order to see if we can get some data until we get the updated netkvm drivers?

> Bryan, I leave the VDSM hook request from comment 42 to you and the RHEV
> guys.

Support does not typically write hooks for customers. This is something we would need engineering assistance on.

(In reply to Ronen Hod from comment #47)
> The thing that helps the most for BSOD analysis is a dump file.
> Alternatively, if dump files are not generated (probably not the case), try
> to connect debugger to guest.
> Please try to get one.

I assume you're talking about using WinDBG and attaching it to the running VM? In normal KVM this would be doable, but I am told that this is nearly impossible to do in a RHEV environment. Thoughts?

(In reply to Bryan Yount from comment #48)
> (In reply to Ronen Hod from comment #45)
> > lijin,
> > Although the customer is not running into memory overcommit. I would
> > recommend that you try to make the host swap. This will introduce large
> > delays and allow us to check the hv_relaxed theory.
> > Amit Shah gave QE a "make_system_swap" utility some half a year ago. You can
> > try to use it.
> How sure are we that this will give us the data we need? Is this something
> we're trying to see if we can get some data until we get the updated netkvm
> drivers?
> > Bryan, I leave the VDSM hook request from comment 42 to you and the RHEV
> > guys.
> Support does not typically write hooks for customers. This is something we
> would need engineering assistance on.
> (In reply to Ronen Hod from comment #47)
> > The thing that helps the most for BSOD analysis is a dump file.
> > Alternatively, if dump files are not generated (probably not the case), try
> > to connect debugger to guest.
> > Please try to get one.
> I assume you're talking about using WinDBG and attaching it to the running
> VM? In normal KVM this would be doable, but I am told that this is nearly
> impossible to do in a RHEV environment. Thoughts?

NetKvm writes to the ISR (address 0x13) only when a BSOD happens. There must be a kernel or minidump file on the crashed system.
There is no need for live debugging with WinDbg or for adjusting any qemu options at the moment.

(In reply to Bryan Yount from comment #48)
> (In reply to Ronen Hod from comment #45)
> > lijin,
> > Although the customer is not running into memory overcommit. I would
> > recommend that you try to make the host swap. This will introduce large
> > delays and allow us to check the hv_relaxed theory.
> > Amit Shah gave QE a "make_system_swap" utility some half a year ago. You can
> > try to use it.
> How sure are we that this will give us the data we need? Is this something
> we're trying to see if we can get some data until we get the updated netkvm
> drivers?

Since QE failed to reproduce it, I suggest this make_system_swap method.
It is worth trying, since from the report it seems as if several guests are suffering at the same time. Two reasons that I can think of are:
- Shortage of host resources (swapping will trigger it)
- VDSM looping over many guests
Actually, all the signs point towards VDSM. We will fix the drivers. Needinfo to VDSM (danken).
You can try to increase the priority of VDSM even further and see what happens.

> > Bryan, I leave the VDSM hook request from comment 42 to you and the RHEV
> > guys.
> Support does not typically write hooks for customers. This is something we
> would need engineering assistance on.

Danken, can you help?

> (In reply to Ronen Hod from comment #47)
> > The thing that helps the most for BSOD analysis is a dump file.
> > Alternatively, if dump files are not generated (probably not the case), try
> > to connect debugger to guest.
> > Please try to get one.
> I assume you're talking about using WinDBG and attaching it to the running
> VM? In normal KVM this would be doable, but I am told that this is nearly
> impossible to do in a RHEV environment. Thoughts?

WinDBG is the second option. As Vadim said, we expect you to find a dump file and send it to us (no need for WinDBG).

I would love to see vdsm.log from when the tight-looping started. A recent fix by Vinzenz, http://gerrit.ovirt.org/15393, may be helpful.

We would love to help diagnose the issue using a vdsm hook, but I'd need more information about what you would like to tweak in the domxml.

(In reply to Dan Kenigsberg from comment #51)
> I would love to see vdsm.log when tight-looping has started. A recent fix by
> Vinzenz http://gerrit.ovirt.org/15393 may be helpful.
> We would love to help diagnose the issue using vdsm hook - but I'd need more
> information about would you like to tweak in the domxml.

At the qemu level, you need to add hv_relaxed to qemu "-cpu" flag, such as "-cpu cpu64-rhel6,hv_relaxed". You can probably "translate" it to XML.
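As a sketch, the "translate it to XML" step Ronen mentions could be done by a before_vm_start hook along these lines. Assumptions: libvirt expresses hv_relaxed as a `<relaxed state='on'/>` element under `<features>/<hyperv>`; a real VDSM hook would read and write the domxml through VDSM's hooking module, which is omitted here so the sketch stays self-contained:

```python
import xml.dom.minidom as minidom

def add_hv_relaxed(domxml):
    """Return domxml with <hyperv><relaxed state='on'/></hyperv> under
    <features>, creating missing elements as needed.

    In a real VDSM before_vm_start hook this would be wrapped with
    hooking.read_domxml()/hooking.write_domxml(); here it is a pure
    function for illustration.
    """
    dom = minidom.parseString(domxml)
    domain = dom.documentElement

    def child(parent, tag):
        # Reuse an existing element or create it under `parent`.
        nodes = parent.getElementsByTagName(tag)
        if nodes:
            return nodes[0]
        node = dom.createElement(tag)
        parent.appendChild(node)
        return node

    features = child(domain, 'features')
    hyperv = child(features, 'hyperv')
    if not hyperv.getElementsByTagName('relaxed'):
        relaxed = dom.createElement('relaxed')
        relaxed.setAttribute('state', 'on')
        hyperv.appendChild(relaxed)
    return dom.documentElement.toxml()
```

A hook built around this and dropped into /usr/libexec/vdsm/hooks/before_vm_start would apply the flag to every newly started VM, as described earlier in the thread.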

Just to be safe & clear -- does the customer save that file to the hypervisor?

(In reply to Dan Kenigsberg from comment #51)
> I would love to see vdsm.log when tight-looping has started. A recent fix by
> Vinzenz http://gerrit.ovirt.org/15393 may be helpful.
> We would love to help diagnose the issue using vdsm hook - but I'd need more
> information about would you like to tweak in the domxml.

So, I just need to pass the hook to the customer and replicate this issue with it in effect? Or do I need to do this to a particular VM?

I think the point is to install it and see if they can reproduce the problem. The hope is that it makes the problem go away. It may take a while to be sure, since it can take them up to a couple days to reproduce.

Scott, the customer is out of the office for the rest of the week. We can have them implement this as soon as they get back.

Bryan,
Once you have access to the machine, please do not forget to get us the dump/minidump. It is essential for the Windows side of the BSOD analysis.
*** Bug 94771 has been marked as a duplicate of this bug. ***
(In reply to Ronen Hod from comment #58)
> Bryan,
> Once you have access to the machine, please do not forget to get us the
> dump/minidump. It is essential for the Windows side of the BSOD analysis.

Hey Ronen, maybe I'm missing something but without the aforementioned virtio driver update that actually allows the Windows guest to properly crash, we won't have any dumps/minidumps. We need the updated virtio driver if possible first. When do we expect the team to return and provide us a hotfix driver?
Bryan, have you already checked for the dump? It is not clear to me.  It may well be there, depending on the processing sequence.
(In reply to Bryan Yount from comment #60)
> (In reply to Ronen Hod from comment #58)
> > Bryan,
> > Once you have access to the machine, please do not forget to get us the
> > dump/minidump. It is essential for the Windows side of the BSOD analysis.
> Hey Ronen, maybe I'm missing something but without the aforementioned virtio
> driver update that actually allows the Windows guest to properly crash, we
> won't have any dumps/minidumps. We need the updated virtio driver if
> possible first. When do we expect the team to return and provide us a hotfix
> driver?

We should have a (mini)dump file in any case, even with the old driver.
(In reply to Bryan Yount from comment #60)
> (In reply to Ronen Hod from comment #58)
> > Bryan,
> > Once you have access to the machine, please do not forget to get us the
> > dump/minidump. It is essential for the Windows side of the BSOD analysis.
> Hey Ronen, maybe I'm missing something but without the aforementioned virtio
> driver update that actually allows the Windows guest to properly crash, we
> won't have any dumps/minidumps. We need the updated virtio driver if
> possible first. When do we expect the team to return and provide us a hotfix
> driver?

I don't understand why you are saying that the driver is not allowing Windows to crash.

The driver writes 0 to the ISR port during the crash callback, and Windows should display a BSOD (of course, by default Windows is set up to reboot). But in any case you should have at least a minidump.

Yan.
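As an aside on the dump/minidump discussion: whether a crashed Windows guest leaves a dump at all is controlled by standard CrashControl registry values inside the guest (these are generic Windows settings, not specific to virtio; sketched here, with 2 selecting a kernel dump and AutoReboot disabled so the BSOD stays on screen instead of rebooting):

```ini
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
; 0=none, 1=complete, 2=kernel, 3=small (minidump)
"CrashDumpEnabled"=dword:00000002
; keep the BSOD on screen instead of rebooting
"AutoReboot"=dword:00000000
```

The kernel dump lands in %SystemRoot%\MEMORY.DMP by default, and minidumps in %SystemRoot%\Minidump.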
Scott, BSOD is mentioned with reference to comment #21. I think this is a bit of a misunderstanding and it could explain why we're not seeing dump files. The customer indicated to me in a phone call today that Windows dump was set up correctly and the VMs in question were persistent VMs.

Based on the error message "virtio_ioport_write unexpected address 0x13 value 0", there are two cases in which the driver writes to the virtio ISR register:
Driver writes 1 - triggered from the BSOD callback registered by the driver. Windows will BSOD.
Driver writes 0 - triggered when a transmit packet submitted to QEMU is not returned within several seconds (no interrupts from the host for the virtio-net device). In all previous cases where we saw such behavior, it meant a problem in the host networking. Another possible cause is that the guest is not getting CPU time for a long time.

We are seeing the driver write a 0, which means we won't actually get a BSOD, and therefore no minidump. So, just to be sure, can you clarify: did you experience an actual BSOD on these VMs, or was that just speculation based on the error?

*New information from the customer*

- This issue happened over the weekend on a host in their DEV environment with only 5 VMs running. A new logcollector will be uploaded soon.
- This problem did not happen on RHEL 6.3 / vdsm 3.0.113.1, which was the previous version installed on the DEV hosts (PureFlex was 6.4 / vdsm-4.10.2-1.9 from the outset, then upgraded to vdsm-4.10.2-1.13).

I have not personally observed a BSOD.
Bryan,

Can you give an update on whether the customer has added that hook?
New LogCollector from DEV environment with only 5 VMs running is available here:
http://spacesphere.usersys.redhat.com/00838592/sosreport-LogCollector-e403705-20130617142641-f15b.tar.xz
Bryan, regardless of whether or not the underlying problem is the relaxed timer problem causing a Windows kernel panic, it seems to me that you need to deploy a fixed netkvm driver without the bug to allow you to do the problem determination. I would also suggest that there is no downside, as far as I know, to turning on the relaxed timer property, so you might as well deploy the vdsm hook just in case. If there remains a problem after that, then I'm sure you and the rest of the Red Hat team will be able to debug it (and we in the LTC will be happy to assist in any way we can).
(In reply to clnperez from comment #67)
> Bryan,
> Can you give an update on whether the customer has added that hook?

We have not yet asked the customer to implement the hook. Based on a discussion with Yan this morning, we have decided not to go that route for the time being. Since we are not seeing BSODs in the Windows guests (the virtio driver is writing a 0, not a 1, to the ISR), implementing the hook might mask the problem so that we'll never get to the bottom of it. We're going to keep the hook on the back burner for now.

In the meantime, we have asked the customer to disable KSM on the hosts and test again. We will grab additional logs from the guest in addition to new logs from the hosts.

(In reply to seg@us.ibm.com from comment #70)
> Bryan, regardless of whether or not the underlying problem is the relaxed
> timer problem causing a windows kernel panic

We're still not sure that we are seeing a kernel panic.

> deploy a fixed netkvm driver without the bug to allow you to do the problem
> determination.

That was based on a previous discussion and with limited knowledge. In their latest crash on DEV, they had only 5 VMs running and did not see this virtio error in the qemu logs.

> I would also suggest that there is no downside, as far as I
> know, to turning on the relaxed timer property, so you might as well deploy
> the vdsm hook just in case.

But it may mask the real issue. Let's try whittling this down one by one. We have an action plan for now and will update when we have new info.
Not sure if it was done already, but here it is:

The next time the hang happens, kill all VMs but one and run kvm_stat on the host. Look at the kvm_inj_virq field; if the second number is huge, it means an interrupt storm is going on. Run "trace-cmd record -e kvm:kvm_set_irq -e kvm:kvm_msi_set_irq", then "trace-cmd report", and "info pci" in the qemu monitor, and provide the output of both.
Hi, Guys

It seems I can reproduce this issue, and I still think it is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=823818

1. Start the VM with virtio-serial-pci & virtio-net-pci
CLI:/usr/libexec/qemu-kvm -enable-kvm -m 1024 -smp 4,sockets=4,cores=1,threads=1 -name win2k8-R2 -uuid e2eaca3e-e764-f57b-22f0-74f4ab8c4965 -monitor stdio -rtc base=localtime,driftfix=slew -drive file=/home/win7-32-virtio.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=none  -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,script=/etc/qemu-ifup,downscript=no,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=ff:04:00:15:af:6a,bus=pci.0,addr=0x3 -vnc :20 -device virtio-serial-pci,id=serial0 -chardev socket,path=/tmp/tt,server,nowait,id=chardev0 -device virtserialport,id=port0,name=com.redhat.rhevm.vdsm,chardev=chardev0 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=1

2. Install the virtio-win-prewhql-49 virtio-net and virtio-serial drivers
3. Run "ping www.google.com -t" in a loop in the guest
4. Transfer data from the guest to the host in a loop (to emulate the behavior of the RHEV-Agent)
4.1 On the host: # nc -U /tmp/tt
4.2 In the guest, write a 4 MB text file named 1.txt
#cat 1.bat
copy 1.txt \\.\Global\.\com.redhat.rhevm.vdsm

The guest hangs, and the qemu monitor shows "virtio_ioport_write: unexpected address 0x13 value 0x0" in a loop.
I cannot reproduce the hang after upgrading the virtio-win driver to build 64.
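The receiving side in step 4.1 can also be emulated without nc. Below is a minimal Python sketch of a host-side client that drains guest-to-host data from the chardev socket; the path /tmp/tt and the "server,nowait" chardev option come from the CLI above, while the function name and chunk size are my own illustrative choices:

```python
import socket

SOCK_PATH = "/tmp/tt"  # chardev socket path from the qemu-kvm CLI above


def drain_port(path=SOCK_PATH, chunk=4096):
    """Connect to the virtserialport chardev socket (QEMU is the
    listening side because of "server,nowait") and read guest-to-host
    data until the peer closes; return the number of bytes received."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        total = 0
        while True:
            data = s.recv(chunk)
            if not data:  # EOF: the other end closed the connection
                break
            total += len(data)
        return total
```

Killing this receiver (or nc) while the guest is mid-copy is exactly the kind of interruption that triggers the hang with the affected driver.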
**Updated summary**

Issue #1 - https://bugzilla.redhat.com/show_bug.cgi?id=970217 - The vdsm listener channel thread, which monitors all of the virtual guests for memory stats and things like IP address, was dying on the host. This thread actually uses the virtio-serial connection for communication, and when the listener thread died, it caused...

Issue #2 - https://bugzilla.redhat.com/show_bug.cgi?id=823818 - The virtio-serial device does not properly handle an interruption in communication and will hang the guest. Inside a RHEV guest, the virtio-serial device is used for communication from the guest to vdsm on the host. When the vdsm listener thread died, it severed this communication, which caused the virtio-serial driver to hang the guest.

So, this explains why we saw the problem on all running VMs at the same time. The vdsm thread dies, severing the communication to everything running, and the guests hang as a result.

Now, the question is, what do we do with this bug? Should we keep it open, since it initially dealt with the "virtio_ioport_write unexpected address 0x13 value 0" issue, or is that handled elsewhere? This BZ did get pretty comment-heavy, though, so I wouldn't be opposed to closing it and opening something separate for the ioport issue (unless there's already one open).
*** This bug has been marked as a duplicate of bug 823818 ***
Comment 89 IBM Bug Proxy 2013-07-02 18:15:02 EDT
IBM, please do not post comments that originated in another private bugzilla in this public bugzilla. If you do, please post them as private comments as they may contain sensitive customer data. Marking comment #80 as private.

Additionally, the aforementioned hook is unrelated to this issue or Bug 968545. If you would like to raise the relaxed timer issue, please do so in a new BZ.
Comment 90 Mike Cao 2013-07-05 04:07:01 EDT
Hi, all

virtio-win-1.6.5-5.noarch.el6 has shipped via RHN.
Please check whether the new drivers solve your issue.

Thanks,
Mike
Comment 91 Bryan Yount 2013-07-09 19:50:28 EDT
Here is the erratum for anyone who needs it:
https://rhn.redhat.com/errata/RHBA-2013-1016.html
Comment 93 errata-xmlrpc 2013-11-21 18:56:40 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1729.html
