Bug 811910 - [virtio-win][viostor] BSOD when doing S3 after upgrade virtio-blk driver to the latest build virtio-win-prewhql-0.1-25 on win7-32.
Keywords:
Status: CLOSED DUPLICATE of bug 811161
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virtio-win
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Vadim Rozenfeld
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-04-12 10:02 UTC by dawu
Modified: 2012-04-16 08:07 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-04-14 20:04:27 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
win7-32-blk-upgrad-BSOD (18.02 KB, image/png)
2012-04-12 10:02 UTC, dawu
no flags Details

Description dawu 2012-04-12 10:02:55 UTC
Created attachment 577025 [details]
win7-32-blk-upgrad-BSOD

Description of problem:
BSOD when doing S3 after upgrading the virtio-blk driver from virtio-win-prewhql-0.1-16 to the latest build virtio-win-prewhql-0.1-25 on win7-32. I also tried upgrading from virtio-win-prewhql-0.1-24 to 25; the same thing happened.

Version-Release number of selected component (if applicable):
kernel-2.6.32-259.el6.x86_64
qemu-kvm-0.12.1.2-2.270.el6.x86_64
virtio-win-prewhql-0.1-25
seabios-0.6.1.2-16.el6.x86_64 


How reproducible:
always

Steps to Reproduce:
1. Start the guest with the old driver, virtio-win-prewhql-0.1-16 or virtio-win-prewhql-0.1-24:
   /usr/libexec/qemu-kvm -m 2G -smp 2 -cpu cpu64-rhel6,+x2apic,family=0xf -usb -device usb-tablet -drive file=win7-32-0326.raw,format=raw,if=none,id=drive-virtio0,boot=on,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-virtio0,id=virtio-blk-pci0,bootindex=1 -netdev tap,sndbuf=0,id=hostnet0,script=/etc/qemu-ifup0,downscript=no -device e1000,netdev=hostnet0,mac=00:10:1a:01:78:26,bus=pci.0,addr=0x4  -uuid b35f00e9-c93d-4c14-883e-0451a4331d2c -rtc base=localtime,clock=host,driftfix=slew -no-kvm-pit-reinjection -chardev socket,id=111a,path=/tmp/win7-32,server,nowait -mon chardev=111a,mode=readline -monitor stdio  -spice disable-ticketing,port=5931 -vga qxl
  
2. Upgrade the driver to virtio-win-prewhql-0.1-25.

3. Suspend the guest (S3).
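
Two common ways to carry out step 3, shown only as a sketch (these exact commands are not from the original report, and monitor-command availability depends on the qemu version):

```shell
# Inside the Windows guest: request S3 (suspend to RAM).  With hibernation
# disabled beforehand ("powercfg /hibernate off"), SetSuspendState suspends
# rather than hibernating.
rundll32.exe powrprof.dll,SetSuspendState 0,1,0
```

Depending on the qemu version, a suspended guest can later be woken from the `-monitor stdio` prompt started in step 1, e.g. with `system_wakeup`.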
  
Actual results:
Guest BSOD with error code 0x000000D1, and qemu quit after it finished collecting the dump file; no error was found on the qemu monitor side. The dump file analysis follows:

*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high.  This is usually
caused by drivers using improper addresses.
If kernel debugger is available get stack backtrace.
Arguments:
Arg1: 00000004, memory referenced
Arg2: 0000000a, IRQL
Arg3: 00000000, value 0 = read operation, 1 = write operation
Arg4: 88424f8d, address which referenced memory

Debugging Details:
------------------


READ_ADDRESS:  00000004 

CURRENT_IRQL:  a

FAULTING_IP: 
viostor+1f8d
88424f8d 8b4804          mov     ecx,dword ptr [eax+4]

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0xD1

PROCESS_NAME:  System

TRAP_FRAME:  807e1d14 -- (.trap 0xffffffff807e1d14)
ErrCode = 00000000
eax=00000000 ebx=00000000 ecx=8447369c edx=000000b0 esi=844af5e8 edi=8447369c
eip=88424f8d esp=807e1d88 ebp=807e1d9c iopl=0         nv up ei pl nz na po nc
cs=0008  ss=0010  ds=0023  es=0023  fs=0030  gs=0000             efl=00010202
viostor+0x1f8d:
88424f8d 8b4804          mov     ecx,dword ptr [eax+4] ds:0023:00000004=????????
Resetting default scope

LAST_CONTROL_TRANSFER:  from 88424f8d to 8268c5cb

STACK_TEXT:  
807e1d14 88424f8d badb0d00 000000b0 807c1120 nt!KiTrap0E+0x2cf
WARNING: Stack unwind information not available. Following frames may be wrong.
807e1d9c 883b26ae 8447369c 00000001 807e1dbc viostor+0x1f8d
807e1dac 883b30f0 8446a188 00000001 807e1dd0 storport!RaCallMiniportMsiInterrupt+0x1b
807e1dbc 826d5b30 8447c500 8446a0e8 00000001 storport!RaidpAdapterMSIInterruptRoutine+0x29
807e1dd0 826857ad 8447c500 8446a0e8 807e1dfc nt!KiInterruptMessageDispatch+0x12
807e1dd0 82616da2 8447c500 8446a0e8 807e1dfc nt!KiInterruptDispatch+0x6d
807e1e6c 82619a4d 8446a188 844e8228 00000002 hal!HalpGenerateInterrupt+0x2ce
807e1e8c 82619ba0 00000000 84473302 807e1ea4 hal!HalpLowerIrqlHardwareInterrupts+0xf5
807e1e9c 8272c2d2 807e1eb8 883b287e 84473410 hal!KfLowerIrql+0x58
807e1ea4 883b287e 84473410 00000002 8446a0e8 nt!KeReleaseInterruptSpinLock+0x19
807e1eb8 883b2917 8446a0e8 00000002 807e1ef0 storport!RaidAdapterReleaseInterruptLock+0x54
807e1ec8 883b53bb 8446a0e8 807e1ee4 85c113b0 storport!RaidAdapterReleaseHwInitializeLock+0x14
807e1ef0 883bcdd9 8446a0e8 8446a0e8 85c113b0 storport!RaidAdapterStopAdapter+0x3d
807e1f08 883bcea3 85c113b0 8446a0e8 8446a0e8 storport!RaidAdapterDevicePowerstopAdapter+0x46
807e1f20 883b331e 85243008 8446a0a4 85fce99c storport!RaidAdapterDevicePowerDownSrbComplete+0x4d
807e1f48 826c31b5 8446a0a4 8446a030 00000000 storport!RaidpAdapterDpcRoutine+0x51
807e1fa4 826c3018 807c1120 85fce918 00000000 nt!KiExecuteAllDpcs+0xf9
807e1ff4 826c27dc 9c2a1b78 00000000 00000000 nt!KiRetireDpcList+0xd5
807e1ff8 9c2a1b78 00000000 00000000 00000000 nt!KiDispatchInterrupt+0x2c
826c27dc 00000000 0000001a 00d6850f bb830000 0x9c2a1b78


STACK_COMMAND:  kb

FOLLOWUP_IP: 
viostor+1f8d
88424f8d 8b4804          mov     ecx,dword ptr [eax+4]

SYMBOL_STACK_INDEX:  1

SYMBOL_NAME:  viostor+1f8d

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: viostor

IMAGE_NAME:  viostor.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  4f81aec6

FAILURE_BUCKET_ID:  0xD1_viostor+1f8d

BUCKET_ID:  0xD1_viostor+1f8d

Followup: MachineOwner
--------------------------------------------------------------------------------
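
The analysis above is standard WinDbg `!analyze -v` output; a session against the attached dump would look roughly like this (sketch only, using standard WinDbg commands):

```
windbg -z win7-32-BSOD-S3-Uprade.DMP
0: kd> !analyze -v                  (produces the bugcheck analysis above)
0: kd> .trap 0xffffffff807e1d14     (switch to the trap frame context)
0: kd> kb                           (stack backtrace; cf. STACK_COMMAND)
0: kd> lmvm viostor                 (details on the faulting module)
```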

For more details, please refer to the attached "win7-32-BSOD-S3-Uprade.DMP" and "win7-32-blk-upgrad-BSOD-S3-new.png".

Expected results:
S3 should succeed after the driver upgrade.

Additional info:
1. This issue does not reproduce when downgrading the driver.
2. After rebooting the guest once qemu has quit, the guest can do S3 without any error.

Comment 2 Miya Chen 2012-04-12 10:49:00 UTC
Hi Dawn, do you mean this is a regression of the block driver in build 25? How about doing S3 with builds 16 and 24? Thanks.

Comment 3 Vadim Rozenfeld 2012-04-14 20:04:27 UTC

*** This bug has been marked as a duplicate of bug 811161 ***

Comment 4 dawu 2012-04-16 07:24:52 UTC
(In reply to comment #2)
> Hi Dawn, you mean this is a regression of the block driver in build 25 ? How
> about doing s3 with build 16 and 24? thanks.

Hi Miya,

This issue does not exist on build 24 when upgrading from 16 to 24. I also verified this issue on the latest build 26 with the same steps as in comment #0, and it does not reproduce any more. On further testing, I also cannot reproduce this issue on build 25.

Best Regards,
Dawn

Comment 5 Vadim Rozenfeld 2012-04-16 07:42:42 UTC
(In reply to comment #4)
> (In reply to comment #2)
> > Hi Dawn, you mean this is a regression of the block driver in build 25 ? How
> > about doing s3 with build 16 and 24? thanks.
> 
> Hi Miya,
> 
> This issue does not exist on 24 when upgrade from 16 to 24,and I verified this
> issue on the latest build 26 with the same steps in commont #0, also does not
> reproduce any more, more viewed, I also can not reproduce this issue on build
> 25. 
Hi Dawn,
I was able to reproduce this crash with drivers from build 25. It was definitely a regression, introduced with the flush request handler bug-fix. I believe I managed to fix this problem in build 26, but the fix requires re-WHQL'ing the entire viostor driver.

best regards,
Vadim.
 
> 
> Best Regards,
> Dawn

Comment 6 Mike Cao 2012-04-16 07:55:56 UTC
(In reply to comment #5)
> (In reply to comment #4)
> > (In reply to comment #2)
> > > Hi Dawn, you mean this is a regression of the block driver in build 25 ? How
> > > about doing s3 with build 16 and 24? thanks.
> > 
> > Hi Miya,
> > 
> > This issue does not exist on 24 when upgrade from 16 to 24,and I verified this
> > issue on the latest build 26 with the same steps in commont #0, also does not
> > reproduce any more, more viewed, I also can not reproduce this issue on build
> > 25. 
> Hi Dawn,
> I was able to reproduce this crash with drivers from build 25. Definitely it
> was a regression, introduced with the flush request handler bug-fix. I believe,
> I managed to fix this problem in build 26, but this fix requires entire
> re-WHQL'ing of viostor driver.
> 
> best regards,
> Vadim.

Hi, Vadim

From my test, your patch fixed 811161, and what you requested is in QE's work plan for this week :)

Mike

> 
> > 
> > Best Regards,
> > Dawn

