Bug 1270450

Summary: [virtio-win][netkvm] NetKVM status is improper on Windows 2012 R2 with multiple cards installed
Product: Red Hat Enterprise Linux 7
Reporter: Min Deng <mdeng>
Component: virtio-win
Assignee: Yvugenfi <yvugenfi>
virtio-win sub component: virtio-win-prewhql
QA Contact: Virtualization Bugs <virt-bugs>
Status: CLOSED ERRATA
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: ailan, ddepaula, lijin, lmiksik, mdeng, phou, qzhang, wyu, yvugenfi
Version: 7.3
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-10 06:28:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1401400, 1473046    
Attachments:
Screenshot (flags: none)
The log of the driver load failure (flags: none)
build 112_with mq (flags: none)
build 112_without mq (flags: none)
dbgview and eventviewer log for win10-32 multi nic (flags: none)

Description Min Deng 2015-10-10 07:33:30 UTC
Created attachment 1081532 [details]
Screenshot

Description of problem:
NetKVM status is improper on Windows 2012 R2 with multiple cards installed.

Version-Release number of selected component (if applicable):
virtio-win-prewhql-0.1-110
kernel-3.10.0-317.el7.x86_64
qemu-kvm-rhev-2.3.0-24.el7.x86_64

How reproducible:
3/3

Steps to Reproduce:
1. Boot up the guest with the following CLI (a minimal single-NIC sketch of the per-NIC pattern follows the steps below):

  /usr/libexec/qemu-kvm -M pc-i440fx-rhel7.2.0 -vga qxl -spice disable-ticketing,port=5931 \
    -enable-kvm -m 4G -smp 4,sockets=2,cores=2,threads=1,maxcpus=240 -name rhel7 \
    -uuid 745fe449-aac8-29f1-0c2d-5042a707263b \
    -drive file=win2012R21010.raw,if=none,id=drive-ide0-0-0,format=raw,cache=none \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
    -netdev tap,id=hostnet0,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet0,id=virtio-net-pci0,mac=4e:63:28:bc:b1:25 \
    -monitor stdio -serial unix:/tmp/tty0,server,nowait \
    -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet1,id=virtio-net-pci1,mac=4e:63:28:bc:b1:01 \
    -netdev tap,id=hostnet2,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet2,id=virtio-net-pci2,mac=4e:63:28:bc:b1:02 \
    -netdev tap,id=hostnet3,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet3,id=virtio-net-pci3,mac=4e:63:28:bc:b1:03 \
    -netdev tap,id=hostnet4,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet4,id=virtio-net-pci4,mac=4e:63:28:bc:b1:04 \
    -netdev tap,id=hostnet5,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet5,id=virtio-net-pci5,mac=4e:63:28:bc:b1:05 \
    -netdev tap,id=hostnet6,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet6,id=virtio-net-pci6,mac=4e:63:28:bc:b1:06 \
    -netdev tap,id=hostnet7,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet7,id=virtio-net-pci7,mac=4e:63:28:bc:b1:07 \
    -netdev tap,id=hostnet8,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet8,id=virtio-net-pci8,mac=4e:63:28:bc:b1:08 \
    -netdev tap,id=hostnet9,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet9,id=virtio-net-pci9,mac=4e:63:28:bc:b1:09 \
    -netdev tap,id=hostnet10,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet10,id=virtio-net-pci10,mac=4e:63:28:bc:b1:0A \
    -netdev tap,id=hostnet11,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet11,id=virtio-net-pci11,mac=4e:63:28:bc:b1:0B \
    -netdev tap,id=hostnet12,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet12,id=virtio-net-pci12,mac=4e:63:28:bc:b1:0C \
    -netdev tap,id=hostnet13,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet13,id=virtio-net-pci13,mac=4e:63:28:bc:b1:0D \
    -netdev tap,id=hostnet14,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet14,id=virtio-net-pci14,mac=4e:63:28:bc:b1:0E \
    -netdev tap,id=hostnet15,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet15,id=virtio-net-pci15,mac=4e:63:28:bc:b1:0F \
    -netdev tap,id=hostnet16,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet16,id=virtio-net-pci16,mac=4e:63:28:bc:b1:10 \
    -netdev tap,id=hostnet17,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet17,id=virtio-net-pci17,mac=4e:63:28:bc:b1:11 \
    -netdev tap,id=hostnet18,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet18,id=virtio-net-pci18,mac=4e:63:28:bc:b1:12 \
    -netdev tap,id=hostnet19,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet19,id=virtio-net-pci19,mac=4e:63:28:bc:b1:13 \
    -netdev tap,id=hostnet20,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet20,id=virtio-net-pci20,mac=4e:63:28:bc:b1:14 \
    -netdev tap,id=hostnet21,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet21,id=virtio-net-pci21,mac=4e:63:28:bc:b1:15 \
    -netdev tap,id=hostnet22,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet22,id=virtio-net-pci22,mac=4e:63:28:bc:b1:16 \
    -netdev tap,id=hostnet23,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet23,id=virtio-net-pci23,mac=4e:63:28:bc:b1:17 \
    -netdev tap,id=hostnet24,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet24,id=virtio-net-pci24,mac=4e:63:28:bc:b1:18 \
    -netdev tap,id=hostnet25,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet25,id=virtio-net-pci25,mac=4e:63:28:bc:b1:19 \
    -netdev tap,id=hostnet26,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet26,id=virtio-net-pci26,mac=4e:63:28:bc:b1:1A \
    -netdev tap,id=hostnet27,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet27,id=virtio-net-pci27,mac=4e:63:28:bc:b1:1B \
    -netdev tap,id=hostnet28,script=/etc/qemu-ifup,vhost=on,queues=4 -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet28,id=virtio-net-pci28,mac=4e:63:28:bc:b1:1C

2. Log in to the guest and open Device Manager.
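
For reference, each additional NIC in the CLI above is one netdev/device pair on the same pattern. A minimal single-NIC variant of the same configuration (an illustrative sketch, not the exact CLI under test) is shown below; note that vectors=10 matches the usual virtio-net multiqueue sizing of 2*queues+2 (one MSI-X vector per RX and per TX queue, plus one each for control and configuration):

  # Hypothetical minimal reproducer with a single multiqueue NIC;
  # vectors = 2*queues + 2 = 10 for queues=4.
  /usr/libexec/qemu-kvm -M pc-i440fx-rhel7.2.0 -enable-kvm -m 4G -smp 4 \
    -drive file=win2012R21010.raw,if=none,id=drive-ide0-0-0,format=raw,cache=none \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
    -netdev tap,id=hostnet0,script=/etc/qemu-ifup,vhost=on,queues=4 \
    -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet0,id=virtio-net-pci0,mac=4e:63:28:bc:b1:25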


Actual results:
Four of the NetKVM devices show an improper status; see Device Manager in the attached screenshot.

Expected results:
All NetKVM devices show the correct status.

Additional info:

Comment 5 Yvugenfi@redhat.com 2016-01-13 08:40:29 UTC
Created attachment 1114310 [details]
The log of the driver load failure

Comment 7 Peixiu Hou 2016-01-14 03:37:51 UTC
Created attachment 1114645 [details]
build 112_with mq

Comment 8 Peixiu Hou 2016-01-14 03:38:54 UTC
Created attachment 1114647 [details]
build 112_without mq

Comment 9 Yvugenfi@redhat.com 2016-04-18 08:12:33 UTC
*** Bug 1202609 has been marked as a duplicate of this bug. ***

Comment 10 Yu Wang 2016-09-07 02:38:06 UTC
Win10-32 hits the same issue with multi-queue and a virtio 1.0 device.

Thanks
Yu Wang
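
For reference, a virtio 1.0 (modern-only) NIC such as the one mentioned in comment 10 can be requested from QEMU roughly as follows (an illustrative sketch; the exact flags used in that test are not recorded in this bug):

  # Hypothetical modern-only variant of the per-NIC pair above:
  # disable-legacy=on,disable-modern=off exposes the device as virtio 1.0 only.
  -netdev tap,id=hostnet0,script=/etc/qemu-ifup,vhost=on,queues=4 \
  -device virtio-net-pci,disable-legacy=on,disable-modern=off,mq=on,vectors=10,netdev=hostnet0,id=virtio-net-pci0,mac=4e:63:28:bc:b1:25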

Comment 11 Yvugenfi@redhat.com 2017-05-16 18:54:27 UTC
Please test with the latest build (136). We made a fix that should handle the case where there are not enough MSI vectors for the device.
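
For reference, one way to provoke a per-device vector shortage directly is to request fewer vectors than the queues need (a sketch under the assumption that the fix is exercised whenever the driver receives fewer MSI-X vectors than 2*queues+2; the original report hit the shortage indirectly via the large number of NICs):

  # Hypothetical under-provisioned NIC: queues=4 wants 2*4+2 = 10 vectors,
  # but only 4 are offered, so the driver must cope with the shortfall.
  -netdev tap,id=hostnet0,script=/etc/qemu-ifup,vhost=on,queues=4 \
  -device virtio-net-pci,mq=on,vectors=4,netdev=hostnet0,id=virtio-net-pci0,mac=4e:63:28:bc:b1:25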

Comment 12 lijin 2017-05-17 03:10:08 UTC
Win10-32 still hits this issue with the 137 build.

Comment 13 ybendito 2017-07-12 13:16:52 UTC
(In reply to lijin from comment #12)
> Win10-32 still hits this issue with the 137 build.

I cannot reproduce the problem, even though I tried to follow the description as closely as possible. Please help us understand the problem:

Please use build 140 and follow these steps:

1. Run the VM and reach the problematic state.
2. Run the DbgView application as administrator and turn on kernel log capture.
3. Locate in Device Manager one of the virtio-net devices WITHOUT a yellow bang. Disable it, then enable it (did it succeed?).
4. Locate in Device Manager one of the virtio-net devices WITH a yellow bang. Disable it, then try enabling it (did it succeed?).
5. Locate in Device Manager one of the virtio-net devices WITHOUT a yellow bang. Disable it and leave it disabled.
6. Locate in Device Manager one of the virtio-net devices WITH a yellow bang. Disable it, then try enabling it (did it succeed?).
7. Save the DbgView log.
8. Add the file C:\Windows\System32\winevt\Logs\System.evtx.
9. Zip everything together and attach it to the BZ.

Comment 15 lijin 2017-07-13 13:31:15 UTC
Created attachment 1297617 [details]
dbgview and eventviewer log for win10-32 multi nic

Comment 17 Yu Wang 2017-11-23 02:02:05 UTC
Reproduced with build 137 and verified with build 143 on win2012R2.

Steps as in comment#0.

Based on the above, this bug has been fixed; changing status to VERIFIED.

Thanks
Yu Wang

Comment 19 errata-xmlrpc 2018-04-10 06:28:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0657