Bug 747303 - qemu-kvm with ide-drive becomes non-responsive after disconnection from NFS storage
Summary: qemu-kvm with ide-drive becomes non-responsive after disconnection from NFS storage
Keywords:
Status: CLOSED DUPLICATE of bug 740509
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Orit Wasserman
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-10-19 13:03 UTC by Chao Yang
Modified: 2014-03-04 00:24 UTC (History)
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-07-11 14:08:35 UTC
Target Upstream Version:



Description Chao Yang 2011-10-19 13:03:37 UTC
Description of problem:
Boot a guest located on NFS storage with ide-drive, download big file(writing to storage) after guest boot up, block NFS storage by iptables:
iptables -A OUTPUT -d 10.66.9.7 -j DROP
after a while, guest becomes non-responsive no matter raw or qcow2 format. This doesn't happen with virtio-blk-pci, guests with both raw and qcow2 format are responsive.
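The reproduction above can be sketched as a host-side shell session. This is a sketch, not a verified script: the NFS server address 10.66.9.7 is taken from the report, and the cleanup step is an assumption about how one would restore connectivity afterwards.

```shell
# Sketch of the reproduction, assuming the guest image lives on an NFS
# mount served by 10.66.9.7 (address taken from the report). Run as root.

# 1. Boot the guest and start a large download inside it, so the guest
#    keeps writing to its disk image on NFS.

# 2. On the host, cut off all traffic to the NFS server:
iptables -A OUTPUT -d 10.66.9.7 -j DROP

# 3. Wait a while; with ide-drive the qemu monitor stops responding.

# 4. Restore connectivity when done (assumed cleanup, mirrors step 2):
iptables -D OUTPUT -d 10.66.9.7 -j DROP
```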


After disconnecting from the NFS storage and waiting a moment:
IDE+raw: non-responsive
IDE+qcow2: non-responsive

virtio-blk+raw: responsive
virtio-blk+qcow2: responsive
Version-Release number of selected component (if applicable):
# rpm -q qemu-kvm 
qemu-kvm-0.12.1.2-2.199.el6.x86_64
# uname -r
2.6.32-207.el6.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Boot a guest located on NFS storage with an ide-drive.
2. After the guest boots, start downloading a big file inside it (writing to storage).
3. On the host, block the NFS storage: iptables -A OUTPUT -d 10.66.9.7 -j DROP
4. Wait a while.

Actual results:
IDE+raw/qcow2:
(qemu) info status
VM status: running
(qemu) block I/O error in device 'drive-ide-0-0': Input/output error (5)
ide_dma_cancel: aiocb still pending
ide_dma_cancel: BM_STATUS_DMAING still pending
adfadsfadfa
unknown command: 'adfadsfadfa'
(qemu) fadsfdfasd
unknown command: 'fadsfdfasd'
(qemu) info status
VM status: paused

virtio-blk+raw/qcow2:
(qemu) info status
VM status: running
(qemu) info status
VM status: running
(qemu) info status
VM status: running
(qemu) info status
VM status: running
(qemu) block I/O error in device 'drive-virtio-0-0': Input/output error (5)
block I/O error in device 'drive-virtio-0-0': Input/output error (5)

(qemu) info status
VM status: paused



Expected results:
The qemu monitor remains responsive; the VM pauses on the I/O error (werror=stop,rerror=stop), as in the virtio-blk case.

Additional info:

Comment 2 Dor Laor 2011-10-19 13:52:00 UTC
What's the qemu command line used?
Did you issue a 'cont' after it stopped?

It might be a dup of #740509

Comment 3 Kevin Wolf 2011-10-19 14:38:01 UTC
This should have been a clone of bug 740509, in fact. QE stumbled across this problem while trying to reproduce the problem there. This one is about qemu being non-responsive until it stops the VM (after which continuing the VM is possible); the other one is about a responsive qemu with a stopped VM that cannot be continued.

I believe this problem is caused by IDE using synchronous operations, which block the whole qemu process while waiting for the I/O error.
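The point in comment 3 can be illustrated with a small shell sketch (hypothetical, not qemu code): a synchronous read from a FIFO with no writer blocks the caller outright, the way IDE's synchronous operations block the whole qemu process, while running the same read in the background leaves the caller responsive, loosely analogous to virtio-blk's asynchronous I/O path.

```shell
#!/bin/sh
# Illustration only (not qemu code): a read from a FIFO with no writer
# blocks until data arrives, like a synchronous I/O operation.
tmpdir=$(mktemp -d)
mkfifo "$tmpdir/pipe"

# Synchronous version -- this line would hang the script indefinitely:
#   cat "$tmpdir/pipe"

# Asynchronous version: do the blocking read in the background
# (analogous to an aio path); the foreground stays responsive.
cat "$tmpdir/pipe" &
reader=$!
echo "monitor still responsive"

echo "io-complete" > "$tmpdir/pipe"   # unblock the background reader
wait "$reader"
rm -r "$tmpdir"
```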

Comment 4 Chao Yang 2011-10-20 02:25:19 UTC
(In reply to comment #2)
> What's the qemu command line used?
I tried all combinations of raw/qcow2 + virtio/ide; the following is one of them:
/usr/libexec/qemu-kvm -M rhel6.2.0 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name test-1 -uuid 8aceca2c-61e2-45f2-bddc-fd0843c689d9 -rtc base=utc,clock=host,driftfix=slew -boot menu=on -drive file=/mnt/rhel6.1.z-copy-x86_64.qcow2,if=none,id=drive-ide-0-0,media=disk,format=qcow2,cache=none,werror=stop,rerror=stop -device ide-drive,drive=drive-ide-0-0,id=ide0-0-0,bootindex=1 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:42:0b:12 -usb -device usb-tablet,id=input1,bus=usb.0,port=1 -vnc :1 -monitor stdio -balloon none
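For comparison, the virtio-blk combinations mentioned above differ only in the -drive id and the -device type. A sketch of that variant (the id names are illustrative; the rest of the command line is elided and assumed unchanged):

```shell
# Hypothetical virtio-blk variant of the same drive; only the -drive id
# and the -device type change relative to the IDE case above.
/usr/libexec/qemu-kvm ... \
    -drive file=/mnt/rhel6.1.z-copy-x86_64.qcow2,if=none,id=drive-virtio-0-0,media=disk,format=qcow2,cache=none,werror=stop,rerror=stop \
    -device virtio-blk-pci,drive=drive-virtio-0-0,id=virtio-0-0,bootindex=1 \
    ...
```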

> Did you issue a 'cont' after it stopped?
>
Yes 

> It might be a dup of #740509
Please see comment #3

Comment 6 Orit Wasserman 2011-12-08 13:16:33 UTC
This is a very limited scenario (only IDE and Windows XP guests).
Due to capacity constraints, moved to 6.4.

Comment 7 Orit Wasserman 2012-07-11 14:08:35 UTC

*** This bug has been marked as a duplicate of bug 740509 ***

