Bug 607244 - virtio-blk doesn't load list of pending requests correctly
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All   OS: Linux
Priority: low   Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Kevin Wolf
QA Contact: Virtualization Bugs
Docs Contact:
Depends On:
Blocks: 621501
Reported: 2010-06-23 11:38 EDT by Kevin Wolf
Modified: 2013-01-09 17:46 EST
CC List: 10 users

See Also:
Fixed In Version: qemu-kvm-0.12.1.2-2.108.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned to: 621501
Environment:
Last Closed: 2010-11-10 16:25:39 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Kevin Wolf 2010-06-23 11:38:12 EDT
When loading VM state for virtio-blk, requests are created, but not inserted into the list of pending requests. Therefore they are ignored (and their memory is leaked).
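
(For context, a minimal sketch of the load path in question. It assumes the qemu-kvm-0.12-era layout in which VirtIOBlock keeps pending requests in a singly linked list s->rq; the function and field names follow that source tree, but this is illustrative, not the exact patch.)

/* Sketch of hw/virtio-blk.c's load path. Each request read from the
 * migration stream must be linked into the device's pending-request
 * list (s->rq); otherwise the restart code never sees it and the
 * allocation is leaked. */
static int virtio_blk_load(QEMUFile *f, void *opaque, int version_id)
{
    VirtIOBlock *s = opaque;

    if (version_id != 2) {
        return -EINVAL;
    }

    virtio_load(&s->vdev, f);

    while (qemu_get_sbyte(f)) {
        VirtIOBlockReq *req = virtio_blk_alloc_request(s);
        qemu_get_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));

        /* The bug described above: without the following two lines the
         * request is created but never inserted into the pending list. */
        req->next = s->rq;
        s->rq = req;
    }

    return 0;
}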
Comment 2 RHEL Product and Program Management 2010-06-23 11:52:52 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 9 Shirley Zhou 2010-08-03 00:38:39 EDT
Testing this issue with qemu-kvm-0.12.1.2-2.104.el6: after an I/O error occurs on the source guest and the guest becomes paused, performing migration causes a core dump on the destination guest.

(gdb) bt
#0  virtio_blk_handle_request (req=0x2dea010, mrb=0x7fffe55acf60) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:332
#1  0x000000000041e1cb in virtio_blk_dma_restart_bh (opaque=0x2859d00) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:388
#2  0x0000000000410b5d in qemu_bh_poll () at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:150
#3  0x000000000040b5e9 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4244
#4  0x00000000004289ca in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2133
#5  0x000000000040e47b in main_loop (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4409
#6  main (argc=<value optimized out>, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6566
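
(For context: frames #0 and #1 are the DMA-restart bottom half, which replays the pending-request list when a paused guest resumes. A simplified sketch of that path follows, assuming the s->rq list described in the bug description; it is illustrative, not the exact 0.12 source.)

/* Sketch of the restart path seen in the backtrace: on resume, this
 * bottom half walks the pending-request list and resubmits each entry.
 * If loadvm allocated requests without linking them into s->rq (leaving
 * req->next uninitialized), the walk can follow a garbage pointer and
 * crash inside virtio_blk_handle_request, as in frame #0. */
static void virtio_blk_dma_restart_bh(void *opaque)
{
    VirtIOBlock *s = opaque;
    VirtIOBlockReq *req = s->rq;
    MultiReqBuffer mrb = { .num_writes = 0 };

    s->rq = NULL;

    while (req) {
        VirtIOBlockReq *next = req->next;  /* save before resubmitting */
        virtio_blk_handle_request(req, &mrb);
        req = next;
    }
}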
Comment 12 Shirley Zhou 2010-08-05 04:09:31 EDT
Verified this issue with qemu-kvm-0.12.1.2-2.108.el6 in the following scenarios:

1. Live migration is OK.
2. Migration via file is OK.
3. savevm/loadvm still has a problem, reproduced with the following steps:

3.1 Install a fresh RHEL 6 guest.
3.2 Run the guest.
3.3 After an I/O error, the guest becomes paused.
3.4 savevm s1
3.5 (qemu) cont to resume the guest
3.6 loadvm s1

An error occurs after step 3.6:
Error -22 while loading VM state.
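
(For reference, error -22 is -EINVAL, i.e. the saved state stream was rejected as invalid. Steps 3.4-3.6 correspond to a monitor session along these lines; the snapshot tag s1 comes from the steps above, the rest is illustrative:)

(qemu) savevm s1
(qemu) cont
(qemu) loadvm s1
Error -22 while loading VM state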

Kevin, does this mean we have already fixed this bug? Is the loadvm failure a separate problem?
Comment 13 Kevin Wolf 2010-08-05 04:35:35 EDT
These patches fix the crash that you reported in comment 9, so yes, I think it's complete.

You can file a separate bug for the savevm failure, but it's not 6.0 material as we don't support internal snapshots. We might support them for 6.1.
Comment 14 Shirley Zhou 2010-08-05 06:16:23 EDT
Opened bug 621501 to track the internal snapshot issue.
Changing bug status to VERIFIED per comment 12 and comment 13.
Comment 15 releng-rhel@redhat.com 2010-11-10 16:25:39 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
