Bug 607244 - virtio-blk doesn't load list of pending requests correctly
Summary: virtio-blk doesn't load list of pending requests correctly
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All Linux
Target Milestone: rc
Assignee: Kevin Wolf
QA Contact: Virtualization Bugs
Depends On:
Blocks: 621501
Reported: 2010-06-23 15:38 UTC by Kevin Wolf
Modified: 2013-01-09 22:46 UTC
10 users

Fixed In Version: qemu-kvm-
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 621501
Last Closed: 2010-11-10 21:25:39 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Kevin Wolf 2010-06-23 15:38:12 UTC
When loading VM state for virtio-blk, requests are created, but not inserted into the list of pending requests. Therefore they are ignored (and their memory is leaked).
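The failure mode described above can be sketched with a small, self-contained model. The structure and function names below are illustrative stand-ins, not the actual qemu-kvm definitions; the point is only the difference between allocating a restored request and actually linking it into the driver's pending list:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-ins for the QEMU structures involved (names are
 * illustrative, not the real qemu-kvm definitions). */
typedef struct VirtIOBlockReq {
    int id;                       /* placeholder for the real request data */
    struct VirtIOBlockReq *next;  /* link in the pending-request list */
} VirtIOBlockReq;

typedef struct VirtIOBlock {
    VirtIOBlockReq *rq;           /* head of the pending-request list */
} VirtIOBlock;

/* Buggy load path: the request is created and filled in from the
 * migration stream, but never linked into s->rq, so the DMA-restart
 * code never sees it and its memory leaks. */
static void load_request_buggy(VirtIOBlock *s, int id)
{
    VirtIOBlockReq *req = calloc(1, sizeof(*req));
    req->id = id;
    /* BUG: missing insertion into s->rq */
    (void)s;
    free(req);  /* freed here only so this sketch itself does not leak */
}

/* Fixed load path: insert the restored request at the head of the
 * pending list, so it is replayed when the guest resumes. */
static void load_request_fixed(VirtIOBlock *s, int id)
{
    VirtIOBlockReq *req = calloc(1, sizeof(*req));
    req->id = id;
    req->next = s->rq;
    s->rq = req;
}

/* Walk the pending list, as a restart handler would. */
static int count_pending(const VirtIOBlock *s)
{
    int n = 0;
    for (const VirtIOBlockReq *r = s->rq; r; r = r->next) {
        n++;
    }
    return n;
}
```

With the buggy variant the pending list stays empty no matter how many requests the migration stream contained; with the fixed variant every restored request is visible to the restart path.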

Comment 2 RHEL Product and Program Management 2010-06-23 15:52:52 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for

Comment 9 Shirley Zhou 2010-08-03 04:38:39 UTC
Tested this issue with qemu-kvm-: after an I/O error occurs on the source guest and the guest becomes paused, migration is performed. A core dump then occurs on the destination guest.

(gdb) bt
#0  virtio_blk_handle_request (req=0x2dea010, mrb=0x7fffe55acf60) at /usr/src/debug/qemu-kvm-
#1  0x000000000041e1cb in virtio_blk_dma_restart_bh (opaque=0x2859d00) at /usr/src/debug/qemu-kvm-
#2  0x0000000000410b5d in qemu_bh_poll () at /usr/src/debug/qemu-kvm-
#3  0x000000000040b5e9 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-
#4  0x00000000004289ca in kvm_main_loop () at /usr/src/debug/qemu-kvm-
#5  0x000000000040e47b in main_loop (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-
#6  main (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-

Comment 12 Shirley Zhou 2010-08-05 08:09:31 UTC
Verified this issue with qemu-kvm- in the following scenarios:

1. live migration is OK
2. migration via file is OK
3. savevm/loadvm has a problem, reproduced with the following steps:

3.1 install a fresh RHEL 6 guest
3.2 run the guest
3.3 an I/O error occurs and the guest becomes paused
3.4 savevm s1
3.5 (qemu) cont, resume the guest
3.6 loadvm s1

An error occurs after step 3.6:
Error -22 while loading VM state.

Kevin, does this mean we have already fixed this bug? Is the loadvm issue a separate problem?

Comment 13 Kevin Wolf 2010-08-05 08:35:35 UTC
These patches fix the crash that you reported in comment 9, so yes, I think it's complete.

You can file a separate bug for the savevm failure, but it's not 6.0 material as we don't support internal snapshots. We might support them for 6.1.

Comment 14 Shirley Zhou 2010-08-05 10:16:23 UTC
Opened bug 621501 to track the internal snapshot issue.
Changing bug status to VERIFIED per comment 12 and comment 13.

Comment 15 releng-rhel@redhat.com 2010-11-10 21:25:39 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
