Bug 1367251 - [virtio-win][netkvm] Whql job "2c_Mini6RSSSendRecv (Multi-Group Win8+)" fails on win10-64
Summary: [virtio-win][netkvm] Whql job "2c_Mini6RSSSendRecv (Multi-Group Win8+)" fails on win10-64
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virtio-win
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Yvugenfi@redhat.com
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-16 03:52 UTC by Peixiu Hou
Modified: 2019-05-05 09:18 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-28 07:58:34 UTC
Target Upstream Version:
Embargoed:


Attachments
125NICW10D64CB7 HLK log (3.55 MB, application/zip), 2016-08-16 03:52 UTC, Peixiu Hou
126NICW10D64 HLK package (3.66 MB, application/zip), 2016-08-22 06:59 UTC, Peixiu Hou

Description Peixiu Hou 2016-08-16 03:52:15 UTC
Created attachment 1191052 [details]
125NICW10D64CB7 HLK log

Description of problem:
On a win10-64 guest, the first run of the WHQL job "2c_Mini6RSSSendRecv (Multi-Group Win8+)" caused a BSOD on the support guest; the second and third runs failed with the following error messages:
1. LibProtocolDriver: Install protocol driver failed.
2. Unable to create Library Open object!
3. Failed to create open on Test adapter
4. Unable to create additional receiving opens

Version-Release number of selected component (if applicable):
kernel-3.10.0-481.el7.x86_64
qemu-kvm-rhev-2.6.0-17.el7.x86_64
virtio-win-prewhql-125

How reproducible:
100%

Steps to Reproduce:
1. Boot a client guest:
/usr/libexec/qemu-kvm -name 125NICW10D64CB7 -enable-kvm -m 3G -smp 8 \
    -uuid d7bccbfc-25b0-46ba-8d3e-40752dfe5865 -nodefconfig -nodefaults \
    -chardev socket,id=charmonitor,path=/tmp/125NICW10D64CB7,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime,driftfix=slew -boot order=cd,menu=on \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -drive file=125NICW10D64CB7,if=none,id=drive-ide0-0-0,format=raw,serial=mike_cao,cache=none \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
    -drive file=en_windows_10_enterprise_version_1607_updated_jul_2016_x64_dvd_9054264.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
    -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
    -drive file=125NICW10D64CB7.vfd,if=none,id=drive-fdc0-0-0,format=raw,cache=none \
    -global isa-fdc.driveA=drive-fdc0-0-0 \
    -netdev tap,script=/etc/qemu-ifup1,downscript=no,id=hostnet0 \
    -device e1000,netdev=hostnet0,id=net0,mac=00:52:56:18:16:0a,bus=pci.0,addr=0x3 \
    -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=isa_serial0 \
    -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -vga cirrus \
    -netdev tap,script=/etc/qemu-ifup-private,downscript=no,id=hostnet1,vhost=on,queues=8 \
    -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:52:22:08:3d:66,bus=pci.0,mq=on,vectors=18,disable-legacy=off,disable-modern=off
2. Boot a support guest:
/usr/libexec/qemu-kvm -name 125NICW10D64SB7 -enable-kvm -m 3G -smp 8 \
    -uuid 63747ddf-146b-4546-8385-301c7ed09231 -nodefconfig -nodefaults \
    -chardev socket,id=charmonitor,path=/tmp/125NICW10D64SB7,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime,driftfix=slew -boot order=cd,menu=on \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -drive file=125NICW10D64SB7,if=none,id=drive-ide0-0-0,format=raw,serial=mike_cao,cache=none \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
    -drive file=en_windows_10_enterprise_version_1607_updated_jul_2016_x64_dvd_9054264.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
    -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
    -drive file=125NICW10D64SB7.vfd,if=none,id=drive-fdc0-0-0,format=raw,cache=none \
    -global isa-fdc.driveA=drive-fdc0-0-0 \
    -netdev tap,script=/etc/qemu-ifup1,downscript=no,id=hostnet0 \
    -device e1000,netdev=hostnet0,id=net0,mac=00:52:6d:3d:82:29,bus=pci.0,addr=0x3 \
    -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=isa_serial0 \
    -device usb-tablet,id=input0 -vnc 0.0.0.0:1 -vga cirrus \
    -netdev tap,script=/etc/qemu-ifup-private,downscript=no,id=hostnet1,vhost=on,queues=8 \
    -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:52:73:62:7b:34,bus=pci.0,mq=on,vectors=18,disable-legacy=off,disable-modern=off
3. Run job "2c_Mini6RSSSendRecv (Multi-Group Win8+)"
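For reference, the failing job exercises RSS over the multiqueue virtio-net device configured above (queues=8; vectors=18 matches the usual 2*queues+2 MSI-X sizing). A minimal sketch to sanity-check RSS from an elevated PowerShell prompt inside the guests before running the job; the adapter name "Ethernet 2" is an assumption, use whatever name the virtio NIC gets:

# Sketch: inspect the RSS state of the virtio-net test adapter inside the guest.
Get-NetAdapterRss -Name 'Ethernet 2' | Format-List Name, Enabled, NumberOfReceiveQueues, Profile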

Actual results:
Failed

Expected results:
Passed

Additional info:
1. Job 2c_Mini6RSSSendRecv (Multi-Group Win8+) passed on virtio-win-prewhql-117.
2. The HLK log is attached.

Comment 2 Yvugenfi@redhat.com 2016-08-16 08:54:40 UTC
Please provide the dump file. 

On a clean image, please configure memory dump collection as described here: https://blogs.msdn.microsoft.com/microsoft_apgc_hardware_developer_support_team/2013/11/09/how-to-setup-to-collect-memory-dump/ . Please make sure that AlwaysKeepMemoryDump is configured.
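For reference, a minimal sketch of the equivalent registry settings, run from an elevated PowerShell prompt on the support guest (the values follow the standard CrashControl registry documentation, not this bug; a reboot is needed for them to take effect):

# Sketch: enable kernel memory dumps and keep them even on low disk space.
$cc = 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl'
Set-ItemProperty -Path $cc -Name CrashDumpEnabled     -Value 2 -Type DWord   # 2 = kernel memory dump
Set-ItemProperty -Path $cc -Name AlwaysKeepMemoryDump -Value 1 -Type DWord   # never auto-delete the dump
Set-ItemProperty -Path $cc -Name DumpFile -Value '%SystemRoot%\MEMORY.DMP' -Type ExpandString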


If you run the test a second time and the failure is different, please check that no protocol driver is left installed on the test adapter:

Some tests (especially MPE) may leave many NDISTest<*>Prot and MPE<*>Instance protocols bound to the test devices. This causes all further tests to fail due to abnormal test script termination. The solution is to delete all such protocol instances from the test/support devices via "Network device properties".
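As a hedged sketch, leftover bindings can also be listed and disabled from PowerShell; the "NDISTest*" and "MPE*" display-name patterns are assumptions, so verify them against "Network device properties" first, and note this disables the binding rather than deleting the protocol instance:

# Sketch: disable leftover test protocol bindings on all adapters.
Get-NetAdapterBinding -Name '*' |
    Where-Object { $_.DisplayName -like 'NDISTest*' -or $_.DisplayName -like 'MPE*' } |
    ForEach-Object { Disable-NetAdapterBinding -Name $_.Name -ComponentID $_.ComponentID }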

Comment 3 Peixiu Hou 2016-08-18 04:38:52 UTC
Hi Yan, 

I retested this case with virtio-win-prewhql-126 on a newly installed image, and it still fails. Before running the job, neither new system had any NDISTest<*>Prot protocols. It failed with the following error messages:

1. LibProtocolDriver: Install protocol driver failed.
2. Unable to create Library Open object!
3. Failed to create open on Test adapter
4. Unable to create additional receiving opens

I ran this case a second time and the support server hit a BSOD. After disabling all NDISTest<*>Prot protocols and running a third time, it failed with the above error messages and no BSOD occurred. I configured the memory dump settings on the support server, but I still do not get a dump file.


Best Regards~
Peixiu Hou

Comment 4 Yvugenfi@redhat.com 2016-08-18 08:03:00 UTC
(In reply to Peixiu Hou from comment #3)
> Hi Yan, 
> 
> I retested this case with virtio-win-prewhql-126 on a newly installed image,
> and it still fails. Before running the job, neither new system had any
> NDISTest<*>Prot protocols. It failed with the following error messages:
> 
> 1. LibProtocolDriver: Install protocol driver failed.
> 2. Unable to create Library Open object!
> 3. Failed to create open on Test adapter
> 4. Unable to create additional receiving opens
> 
> I ran this case a second time and the support server hit a BSOD. After
> disabling all NDISTest<*>Prot protocols and running a third time, it failed
> with the above error messages and no BSOD occurred. I configured the memory
> dump settings on the support server, but I still do not get a dump file.
> 
> 
> Best Regards~
> Peixiu Hou


Please gather a crash dump from the support server. Without it I cannot debug the failure.

You can also try reducing the number of vCPUs for the VM running the server side.

Comment 5 Peixiu Hou 2016-08-20 02:34:17 UTC
Hi Yan,

I retested this case many times with virtio-win-prewhql-126 on a newly installed image:

1. Reduced vCPUs to 2, with virtio-1.0: the job failed with the comment#0 errors and did not hit a BSOD.
2. Reduced vCPUs to 2, without virtio-1.0: the job failed with the comment#0 errors and did not hit a BSOD.
3. Tried a single queue: the job also failed with the comment#0 errors and did not hit a BSOD.


Best Regards~
Peixiu Hou

Comment 6 Yvugenfi@redhat.com 2016-08-20 05:39:07 UTC
(In reply to Peixiu Hou from comment #5)
> Hi Yan,
> 
> I retested this case many times with virtio-win-prewhql-126 on a newly
> installed image:
> 
> 1. Reduced vCPUs to 2, with virtio-1.0: the job failed with the comment#0
> errors and did not hit a BSOD.
> 2. Reduced vCPUs to 2, without virtio-1.0: the job failed with the comment#0
> errors and did not hit a BSOD.
> 3. Tried a single queue: the job also failed with the comment#0 errors and
> did not hit a BSOD.
> 
> 
> Best Regards~
> Peixiu Hou

Was it a clean VM image?

Comment 7 Peixiu Hou 2016-08-22 02:20:38 UTC
(In reply to Yan Vugenfirer from comment #6)
> (In reply to Peixiu Hou from comment #5)
> > Hi Yan,
> > 
> > I retested this case many times with virtio-win-prewhql-126 on a newly
> > installed image:
> > 
> > 1. Reduced vCPUs to 2, with virtio-1.0: the job failed with the comment#0
> > errors and did not hit a BSOD.
> > 2. Reduced vCPUs to 2, without virtio-1.0: the job failed with the
> > comment#0 errors and did not hit a BSOD.
> > 3. Tried a single queue: the job also failed with the comment#0 errors and
> > did not hit a BSOD.
> > 
> > 
> > Best Regards~
> > Peixiu Hou
> 
> Was it a clean VM image?

Yes, it is~

Comment 9 Peixiu Hou 2016-08-22 06:59:18 UTC
Created attachment 1192779 [details]
126NICW10D64 HLK package

Hi Yan, 

You can refer to the attached HLKX package; only the job "2c_Mini6RSSSendRecv (Multi-Group Win8+)" was run on this VM.


Best Regards~
Peixiu Hou

Comment 12 Peixiu Hou 2019-05-05 05:12:06 UTC
Hi Yan,

Hit the same issue on a win10-64 1903 guest: the job "2c_Mini6RSSSendRecv (Multi-Group Win8+)" failed with the errors from comment#0, and it cannot be passed via an errata filter now.
I want to confirm with you whether manual errata 5556 (mentioned in comment#11) can cover this test on win10 1903?

Thanks a lot~
Peixiu

