Bug 515655 - Add result test to prevent Infinite loop in raw_pread, reading too large offset
Summary: Add result test to prevent Infinite loop in raw_pread, reading too large offset
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kvm
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Kevin Wolf
QA Contact: Lawrence Lim
URL:
Whiteboard:
Duplicates: 515656
Depends On:
Blocks:
 
Reported: 2009-08-05 08:55 UTC by Dor Laor
Modified: 2014-03-26 01:00 UTC (History)
7 users (show)

Fixed In Version: kvm-83-150.el5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-03-30 07:56:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2010:0271
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: kvm security, bug fix and enhancement update
Last Updated: 2010-03-29 13:19:48 UTC

Description Dor Laor 2009-08-05 08:55:50 UTC
Description of problem:
In rare cases, VMs get stuck in raw_pread. Since the offset is larger than the file size, the read fails and they keep spinning.
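
For illustration only, here is a minimal sketch (not the actual qemu-kvm block-raw-posix.c code) of how a retry loop of this shape spins forever once the offset lies beyond the end of the file: pread() then returns 0, the remaining count never shrinks, and the loop never terminates.

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical helper, not qemu code: read 'count' bytes at 'offset'.
     * If 'offset' is beyond EOF, pread() returns 0, 'count' never
     * decreases, and the while loop never terminates. */
    static ssize_t read_full(int fd, void *buf, size_t count, off_t offset)
    {
        size_t done = 0;
        while (count > 0) {
            ssize_t ret = pread(fd, (char *)buf + done, count, offset + done);
            if (ret < 0) {
                if (errno == EINTR)
                    continue;               /* interrupted, retry */
                return -errno;
            }
            /* ret == 0 (EOF) falls through unhandled: infinite loop */
            done += ret;
            count -= ret;
        }
        return done;
    }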

Version-Release number of selected component (if applicable):
kvm-83-94

How reproducible:
I don't have an exact scenario. It might be booting multiple guests together on slow CPU/storage.

Here are two stack traces:

(gdb) bt
#0  raw_pread_aligned (bs=0x1384d010, offset=40429595691239424, 
    buf=0x1384da00 "", count=4608) at block-raw-posix.c:211
#1  0x0000000000419a9b in raw_pread (bs=0x1384d010, offset=40429595691239424, 
    buf=0x2aaaaaafc010 "RCRD(", count=4608) at block-raw-posix.c:334
#2  0x000000000046c129 in bdrv_read (bs=0x1384d010, 
    sector_num=40429595691239424, buf=0x2aaaaaafc010 "RCRD(", nb_sectors=9)
    at block.c:623
#3  0x00000000004ae92a in decompress_cluster (s=0x1384a9e0, 
    cluster_offset=<value optimized out>) at block-qcow2.c:1184
#4  0x00000000004af83a in qcow_aio_read_cb (opaque=0x13926c70, ret=0)
    at block-qcow2.c:1399
#5  0x00000000004af90a in qcow_aio_read (bs=<value optimized out>, 
    sector_num=<value optimized out>, buf=<value optimized out>, 
    nb_sectors=<value optimized out>, cb=<value optimized out>, 
    opaque=<value optimized out>) at block-qcow2.c:1448
#6  0x000000000046bfc7 in bdrv_aio_read (bs=0x1384a040, sector_num=5769, 
    buf=0x2aaaaacb0200 "Y��\205�\017\205�", nb_sectors=144, 
    cb=0x432500 <ide_read_dma_cb>, opaque=0x13ba2e20) at block.c:1402
#7  0x0000000000432651 in ide_read_dma_cb (opaque=0x13ba2e20, 
    ret=<value optimized out>)
    at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/hw/ide.c:994
#8  0x00000000005171b8 in kvm_outb (opaque=<value optimized out>, addr=55808, 
    data=2 '\002')
---Type <return> to continue, or q <return> to quit---
    at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/qemu-kvm.c:691
#9  0x0000000000541db8 in handle_io (kvm=0x13825cc0, run=0x2aaaaac8c000, 
    vcpu=0) at libkvm.c:735
#10 0x00000000005426a2 in kvm_run (kvm=0x13825cc0, vcpu=0, env=0x138e4830)
    at libkvm.c:964
#11 0x00000000005178f9 in kvm_cpu_exec (env=0x2)
    at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/qemu-kvm.c:210
#12 0x0000000000517b83 in ap_main_loop (_env=<value optimized out>)
    at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/qemu-kvm.c:413
#13 0x00000038632064a7 in start_thread () from /lib64/libpthread.so.0
#14 0x0000003862ad3c2d in clone () from /lib64/libc.so.6


and on other VM:
>> 1. raw_pread() ignores EOF and short reads.
>> >>
>> >> 2. The offset argument on the stuck machine is 3179581371646633472,
>> >> so it will never get out of the loop (raw_pread_aligned() always
>> >> returns 0 at EOF)
>> >>
>> >>
>> >> #0 raw_pread_aligned (bs=0x1feb6010, offset=3179581371646633472,
>> >> buf=0x1feb6c00 "", count=3584) at block-raw-posix.c:208
>> >> #1 0x0000000000419a9b in raw_pread (bs=0x1feb6010,
>> >> offset=3179581371646633472, buf=0x2aaaaaafc1a7 "", count=3177)
>> >> at block-raw-posix.c:334
>> >> #2 0x000000000046ce8d in bdrv_pread (bs=0x1feb6010,
>> >> offset=3179581371646633065, buf1=0x2aaaaaafc010, count1=3584)
>> >> at block.c:781
>> >> #3 0x00000000004aebab in copy_sectors (bs=0x1feb32b0,
>> >> start_sect=<value optimized out>, cluster_offset=30236672,
>> >> n_start=<value optimized out>, n_end=<value optimized out>)
>> >> at block-qcow2.c:1240
>> >> #4 0x00000000004aee0e in alloc_cluster_link_l2 (bs=0x1feb32b0,
>> >> cluster_offset=30236672, m=0x1ff974b8) at block-qcow2.c:1002
>> >> #5 0x00000000004af4c2 in qcow_aio_write_cb (opaque=0x1ff97460, ret=0)
>> >> at block-qcow2.c:1470
>> >> #6 0x0000000000419807 in posix_aio_read (opaque=<value optimized out>)
>> >> at block-raw-posix.c:508
>> >> #7 0x00000000004098f2 in main_loop_wait (timeout=<value optimized out>)
>> >> at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/vl.c:3912
>> >> #8 0x00000000005177da in kvm_main_loop ()
>> >> at /usr/src/debug/kvm-83-maint-snapshot-20090205/qemu/qemu-kvm.c:603
>> >> #9 0x000000000040e8e5 in main (argc=39, argv=0x7fff1f62a928,
>> >> ---Type <return> to continue, or q <re
>> >>
>> >>
>> >> Would someone like to debug the VM?

Might this be fixed with upstream qemu commit
22afa7b5b67bb8bc58939d100363ee6753a50467?

Kevin

Comment 1 Oded Ramraz 2009-08-05 09:03:20 UTC
*** Bug 515656 has been marked as a duplicate of this bug. ***

Comment 2 Dor Laor 2009-08-05 10:35:05 UTC
It happened because management ran two VMs with the same storage...

We need to add an assert there in case it happens.

Comment 3 Kevin Wolf 2009-10-30 09:00:14 UTC
You mean literally an assert()? Not sure if it's the right way to inform the user (his VM will just disappear). The condition means that the image has already been corrupted, but aborting could at least avoid further damage.

Comment 4 Dor Laor 2009-11-02 14:45:19 UTC
(In reply to comment #3)
> You mean literally an assert()? Not sure if it's the right way to inform the

Yap.

> user (his VM will just disappear). The condition means that the image has
> already been corrupted, but aborting could at least avoid further damage.  

Right, that's why we need the assert. Otherwise the VM stops responding (and parallel vCPUs doing I/O might make the corruption worse) and consumes 100% CPU.
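
As an illustration of the kind of check being discussed (a sketch only; the exact patch that went into kvm-83-150 is not shown in this report), the idea is to treat a zero-byte read as fatal instead of retrying, which matches the "raw_pread: read beyond end of file" abort seen in the verification below:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Sketch of the fixed loop: a return of 0 means the offset is past
     * EOF, i.e. the image is already corrupted, so print a message and
     * abort instead of spinning at 100% CPU. */
    static ssize_t read_full_checked(int fd, void *buf, size_t count, off_t offset)
    {
        size_t done = 0;
        while (count > 0) {
            ssize_t ret = pread(fd, (char *)buf + done, count, offset + done);
            if (ret < 0) {
                if (errno == EINTR)
                    continue;
                return -errno;
            }
            if (ret == 0) {
                fprintf(stderr, "raw_pread: read beyond end of file\n");
                abort();
            }
            done += ret;
            count -= ret;
        }
        return done;
    }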

Comment 9 edison 2010-02-02 23:06:34 UTC
I have been stuck on this bug for a long time. It happens randomly, once a week or a few times a week... Even worse, when qemu hangs, libvirt also hangs, because after libvirt sends a command to qemu it waits for the reply forever. I am using RHEL 5.4.
Is it already fixed?

Comment 10 Yolkfull Chow 2010-02-03 03:05:44 UTC
Hi Dor,

Do you have a reproducer for this bug, or how can we verify it?

I tried to verify it on kvm-83-154, which has the patch applied, with the following steps:

1) set a breakpoint in gdb at qemu/block-raw-posix.c:raw_pread
2) set offset=40429595691239424, as shown in the backtrace above
3) run the qemu process and 'next' until the qemu process actually exits with the following error:

...
drive_open (drive=
      {bdrv = 0x10c7010, type = IF_IDE, bus = 0, unit = 0, onerror = BLOCK_ERR_STOP_ENOSPC, serial = '\0' <repeats 20 times>, used = 1, drive_opt_idx = 0, opened = 1, bdrv_flags = 128, file = 0x10c4920 "/tmp/kvm_autotest_root/images/RHEL-Server-5.4-32.raw", drv = 0x0}) at /usr/src/redhat/BUILD/kvm-83-maint-snapshot-20090205/qemu/vl.c:2723
2723            fprintf(stderr, "qemu: could not open disk image %s\n",
...

It looks like it exited but didn't trigger the abort(). Did I miss something, or should it be like this?

Thanks in advance.

Comment 11 Dor Laor 2010-02-10 14:52:54 UTC
Kevin, can you answer it?

Comment 12 Kevin Wolf 2010-02-12 08:56:16 UTC
It's a bit tricky to get this condition, but after some attempts it works for me this way:

1. Run qemu-io with cache=off in gdb: gdb --args ./qemu-io /tmp/floppy.img -n
2. Set the breakpoint: break raw_pread
3. Run the program, and when the breakpoint triggers let it continue so that you get to the qemu-io prompt
4. qemu-io> read -p 0 234 (it is important to use a misaligned size here)
5. When the breakpoint triggers, change the offset to a bad value: set offset=12345678 (you'll probably need a higher value; this one works for me because I used a floppy image that was lying around) and continue
6. Watch the old version hang in an endless loop and observe the abort() in the new one.

Comment 13 Yolkfull Chow 2010-02-23 05:25:25 UTC
Hi Dor and Kevin,

Thank you very much for the support on this bug.
I verified it according to the steps above; the test results follow:


On kvm-83-154, which has the patch applied, it aborted with the error message "read beyond end of file":

# dd if=/dev/zero of=/tmp/floppy.img bs=512 count=2880
2880+0 records in
2880+0 records out
1474560 bytes (1.5 MB) copied, 0.028629 seconds, 51.5 MB/s
[root@linux1 qemu]# 
[root@linux1 qemu]# 
[root@linux1 qemu]# gdb --args ./qemu-io /tmp/floppy.img -n
GNU gdb Fedora (6.8-27.el5)
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu"...
(gdb) break raw_pread
Breakpoint 1 at 0x4189f0: file block-raw-posix.c, line 292.
(gdb) run
Starting program: /usr/src/redhat/BUILD/kvm-83-maint-snapshot-20090205/qemu/qemu-io /tmp/floppy.img -n
[Thread debugging using libthread_db enabled]
[New Thread 0x2b4d92e8a9f0 (LWP 4107)]
[New Thread 0x41442940 (LWP 4110)]

Breakpoint 1, raw_pread (bs=0x10d0e2b0, offset=0, buf=0x7fff345ccfc0 "p��\222M+", count=2048) at block-raw-posix.c:292
292     {
(gdb) c
Continuing.
qemu-io> read -p 0 234

Breakpoint 1, raw_pread (bs=0x10d0d9c0, offset=0, buf=0x10d0e800 '�' <repeats 200 times>..., count=234) at block-raw-posix.c:292
292     {
(gdb) set offset=1234567678899
(gdb) s
293         BDRVRawState *s = bs->opaque;
(gdb) s
298         if (s->aligned_buf != NULL)  {
(gdb) c
Continuing.
raw_pread: read beyond end of file

Program received signal SIGABRT, Aborted.
0x0000003a48a30215 in raise () from /lib64/libc.so.6
(gdb) 


----

On kvm-83-149, which does not have the patch applied yet, it hung until I pressed Ctrl-C:

# gdb --args ./qemu-io /tmp/floppy.img -n
GNU gdb Fedora (6.8-27.el5)
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu"...
(gdb) break raw_pread
Breakpoint 1 at 0x418610: file block-raw-posix.c, line 292.
(gdb) run
Starting program:
/usr/src/redhat/BUILD/kvm-83-maint-snapshot-20090205/qemu/qemu-io
/tmp/floppy.img -n
[Thread debugging using libthread_db enabled]
[New Thread 0x2b3be92d79f0 (LWP 11572)]
[New Thread 0x40dc3940 (LWP 11575)]

Breakpoint 1, raw_pread (bs=0x5b412b0, offset=0, buf=0x7fff5936cf30 "p�,�;+",
count=2048) at block-raw-posix.c:292
292     {
(gdb) c
Continuing.
qemu-io> read -p 0 234

Breakpoint 1, raw_pread (bs=0x5b409c0, offset=0, buf=0x5b41800 '�' <repeats
200 times>..., count=234) at block-raw-posix.c:292
292     {
(gdb) set offset=12345678
(gdb) c
Continuing.




Program received signal SIGINT, Interrupt.
0x0000003a4960d2cb in read () from /lib64/libpthread.so.0
(gdb)

Comment 16 errata-xmlrpc 2010-03-30 07:56:35 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2010-0271.html

