Bug 595665 - Guest crashes after hot-adding a virtio disk in a Windows 7 64-bit guest
Summary: Guest crashes after hot-adding a virtio disk in a Windows 7 64-bit guest
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Markus Armbruster
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-25 10:32 UTC by Mike Cao
Modified: 2013-01-09 22:36 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-07-08 17:10:59 UTC
Target Upstream Version:
Embargoed:



Description Mike Cao 2010-05-25 10:32:08 UTC
Description of problem:
The guest crashed after hot-adding a virtio disk: the qemu-kvm process on the host aborted with a core dump.

Version-Release number of selected component (if applicable):
Host:
# uname -r
2.6.32-28.el6.x86_64
Guest:
Windows 7, 64-bit.

How reproducible:
100%

Steps to Reproduce:
1. Start the Windows 7 64-bit guest with an IDE disk (a sketch of a matching invocation follows these steps).
2. In qemu, attach the newest virtio-win.iso, then install the virtio disk drivers in the guest.
3. In the qemu monitor, hot-add a virtio disk:
   (qemu) pci_add pci_addr=auto storage file=/Images/test3.qcow2,if=virtio
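
For reference, a minimal host invocation matching steps 1 and 2 might look like the following (a sketch only; the report does not give the exact command line, and the guest image path and memory size are placeholders):

# Sketch; /Images/win7-64.qcow2 is a placeholder guest image path.
/usr/libexec/qemu-kvm -m 2048 -smp 2 \
    -drive file=/Images/win7-64.qcow2,if=ide \
    -cdrom /Images/virtio-win.iso \
    -monitor stdio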
  
Actual results:
The VM crashed.
On the host, it shows the following error message:

(qemu) pci_add pci_addr=auto storage file=/Images/test3.qcow2,if=virtio
OK domain 0, bus 0, slot 7, function 0
(qemu) qemu: hardware error: register_ioport_write: invalid opaque
CPU #0:
RAX=000007fffffde800 RBX=fffff900c01c9c30 RCX=fffff880045d1b68 RDX=0000000000000000
RSI=0000000000000000 RDI=0000000000000000 RBP=fffff880045d1ca0 RSP=fffff880045d1a00
R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
R12=0000000000000001 R13=fffff880045d1b68 R14=0000000000000000 R15=0000000000000000
RIP=fffff96000199411 RFL=00000202 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
FS =0053 00000000fffe0000 00003c00 0040f300 DPL=3 DS   [-WA]
GS =002b fffff800027f4d00 ffffffff 00c0f300 DPL=3 DS   [-WA]
LDT=0000 0000000000000000 ffffffff 00000000
TR =0040 fffff80000b96080 00000067 00008b00 DPL=0 TSS64-busy
GDT=     fffff80000b95000 0000007f
IDT=     fffff80000b95080 00000fff
CR0=80050031 CR2=fffff8a001850c90 CR3=00000001030e3000 CR4=000006f8
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
DR6=00000000ffff0ff0 DR7=0000000000000400
FCW=027f FSW=0000 [ST=0] FTW=00 MXCSR=00000000
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000 XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000
XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000
XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000
XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000
XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000
CPU #1:
RAX=0000000000000407 RBX=0000000000000002 RCX=fffff88003132460 RDX=0000000000000cfc
RSI=fffff88003132620 RDI=0000000000000004 RBP=fffff88003132460 RSP=fffff880031323a8
R8 =0000000000000000 R9 =fffff88003132620 R10=0000000000000000 R11=0000000000000006
R12=fffff80002c0b890 R13=fffff88000e1d200 R14=fffff80002c111d0 R15=0000000000000001
RIP=fffff80002bf44bb RFL=00000206 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
CS =0010 0000000000000000 00000000 00209b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
FS =0053 00000000ffef4000 00007c00 0040f300 DPL=3 DS   [-WA]
GS =002b fffff880009e6000 ffffffff 00c0f300 DPL=3 DS   [-WA]
LDT=0000 0000000000000000 ffffffff 00000000
TR =0040 fffff880009eaec0 00000067 00008b00 DPL=0 TSS64-busy
GDT=     fffff880009f14c0 0000007f
IDT=     fffff880009f1540 00000fff
CR0=80050031 CR2=0000000077275360 CR3=0000000000187000 CR4=000006f8
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
DR6=00000000ffff0ff0 DR7=0000000000000400
FCW=027f FSW=3800 [ST=7] FTW=80 MXCSR=00000000
FPR0=9fc0000000000000 4008 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=fffff80002981e80fffff8a0000a0008 XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000 XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000
XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000
XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000
XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000
XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000
Aborted (core dumped)


Expected results:
The disk is added successfully.


Additional info:

Comment 2 RHEL Program Management 2010-05-28 10:36:10 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release. Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux major release. This request is not yet committed for
inclusion.

Comment 3 Markus Armbruster 2010-06-10 08:10:43 UTC
A couple of questions:

Just to make sure: it doesn't happen with a Linux guest, does it?

Does the problem exist with device_add instead of pci_add as well?  To test, start qemu with "-drive if=none,file=/Images/test3.qcow2,id=foo", then give monitor command "device_add virtio-blk-pci"
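
Spelled out as a transcript, the suggested test would be (a sketch; "[...]" stands for the rest of the usual command line, and the drive is attached to the device by its id):

qemu-kvm [...] -drive if=none,file=/Images/test3.qcow2,id=foo
(qemu) device_add virtio-blk-pci,drive=foo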

Could you capture a stack backtrace?

Comment 4 Mike Cao 2010-06-11 10:16:28 UTC
(In reply to comment #3)
> A couple of questions:
> 
> Just to make sure: it doesn't happen with a Linux guest, does it?

Yup, I tested with a RHEL 5 guest; it does NOT trigger this issue.

> Does the problem exist with device_add instead of pci_add as well?  To test,
> start qemu with "-drive if=none,file=/Images/test3.qcow2,id=foo", then give
> monitor command "device_add virtio-blk-pci"

(qemu) device_add virtio-blk-pci,drive=foo
I tried twice; it does NOT trigger this issue.

> Could you capture a stack backtrace?  

(gdb) bt
#0  0x000000366e8329c5 in raise () from /lib64/libc.so.6
#1  0x000000366e8341a5 in abort () from /lib64/libc.so.6
#2  0x000000000040cee8 in hw_error (fmt=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:355
#3  0x00000000004a013b in register_ioport_write (start=<value optimized out>, 
    length=<value optimized out>, size=6, func=0xffffffffffffffff, 
    opaque=0x7f03efe44710) at ioport.c:170
#4  0x0000000000420248 in virtio_map (pci_dev=0x1882920, 
    region_num=<value optimized out>, addr=65408, size=<value optimized out>, 
    type=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:363
#5  0x000000000041840b in pci_update_mappings (d=0x1882920)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/pci.c:1002
#6  0x00000000004209d0 in virtio_write_config (pci_dev=0x1882920, address=4, 
    val=1031, len=2) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:386
#7  0x000000000042a551 in kvm_handle_io (env=0x17c04c0)
    at /usr/src/debug/qemu-kvm-0.12.1.2/kvm-all.c:538
#8  kvm_run (env=0x17c04c0) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:969
#9  0x000000000042a5f9 in kvm_cpu_exec (env=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1652
#10 0x000000000042b21f in kvm_main_loop_cpu (_env=0x17c04c0)
    at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1894
#11 ap_main_loop (_env=0x17c04c0)
#12 0x000000366f007761 in start_thread () from /lib64/libpthread.so.0
#13 0x000000366e8e14fd in clone () from /lib64/libc.so.6
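
The abort in frame #3 is qemu's I/O-port registration sanity check: each registered port slot remembers the opaque pointer of its owning device, and registering a slot that already belongs to a different opaque is treated as a fatal hardware error. Below is a minimal, self-contained C model of that check (an illustration assumed from the qemu-kvm 0.12-era ioport.c named in the backtrace, not the actual source); here virtio_map() hit it while remapping the virtio BAR during the pci_add hot-plug.

/*
 * Minimal model of the "invalid opaque" check (names simplified;
 * hypothetical illustration, not the actual qemu-kvm source).
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_IOPORTS 65536

static void *ioport_opaque[MAX_IOPORTS];   /* owner of each port slot */

static void hw_error(const char *msg)
{
    fprintf(stderr, "qemu: hardware error: %s\n", msg);
    abort();                               /* -> "Aborted (core dumped)" */
}

static void register_ioport_write(int start, int length, void *opaque)
{
    for (int i = start; i < start + length; i++) {
        /* A slot already owned by a *different* device is fatal. */
        if (ioport_opaque[i] != NULL && ioport_opaque[i] != opaque)
            hw_error("register_ioport_write: invalid opaque");
        ioport_opaque[i] = opaque;
    }
}

int main(void)
{
    int dev_a, dev_b;                      /* stand-ins for device state */

    register_ioport_write(0xff80, 32, &dev_a);  /* first mapping: OK */
    register_ioport_write(0xff80, 32, &dev_b);  /* slot has a different
                                                   owner -> hw_error()  */
    return 0;
}

One plausible reading of the backtrace is that the pci_add hot-plug path left a stale port registration in place, so the remap in virtio_map() found a different owner; the qdev-based device_add path (comment 4) does not trip the check.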

  
Additional info:
The issue was found in qemu-kvm-0.12.1.2-2.62.el6

Comment 6 Markus Armbruster 2010-07-08 17:10:59 UTC
pci_add has been gone since qemu-kvm-0.12.1.2-2.76.el6 (bug 602590). According to comment #4, device_add is not affected. Therefore, I'm going to close this as NOTABUG.

Please reopen if you can reproduce it with device_add.

