Bug 887844 - AHCI does not support restarting requests (i.e. rerror=stop and werror=stop/enospc)
Summary: AHCI does not support restarting requests (i.e. rerror=stop and werror=stop/enospc)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: John Snow
QA Contact: jingzhao
URL:
Whiteboard:
Depends On: 953062
Blocks: 1227278 1305606 1313485 1375520
 
Reported: 2012-12-17 13:02 UTC by Paolo Bonzini
Modified: 2017-06-28 07:30 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-rhev-2.5.0-1.el7
Doc Type: Bug Fix
Doc Text:
Due to asynchronous I/O control blocks (AIOCBs) not being properly cleared, guests that use the Advanced Host Controller Interface (AHCI) in some cases terminated unexpectedly when an I/O error occurred. With this update, AIOCBs are cleared properly, and I/O errors on guests with AHCI are resolved gracefully.
Clone Of:
Clones: 1375520
Environment:
Last Closed: 2016-11-07 20:09:04 UTC
Target Upstream Version:
Embargoed:




Links
  System:       Red Hat Product Errata
  ID:           RHBA-2016:2673
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      qemu-kvm-rhev bug fix and enhancement update
  Last Updated: 2016-11-08 01:06:13 UTC

Description Paolo Bonzini 2012-12-17 13:02:26 UTC
Description of problem:
AHCI provides a dummy implementation of restart_cb.  Hence, it will not properly restart after the VM is paused for an I/O error (including ENOSPC).

Version-Release number of selected component (if applicable):
1.3.0

Comment 4 John Snow 2015-04-07 00:04:23 UTC
The patches that enable this have gone upstream, but it's impossible to meaningfully test them without AHCI migration, so this issue is now blocked on #901631.

Comment 7 John Snow 2016-01-13 21:08:27 UTC
Included in rebase to qemu 2.5.0.

Instructions for testing:

- Use -werror=stop or -rerror=stop
- Trigger an ENOSPC error either through blkdebug or a legitimate out of space error.
- Observe VM has paused.
- If not using BLKDEBUG method, correct ENOSPC issue
- Resume VM
- Observe that VM is running and transfer(s) have succeeded.

The qemu/tests/ahci-test suite already tests this workflow.
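
For reference, a minimal sketch of the blkdebug route (the config path, image path, and drive/device IDs here are illustrative, not the exact QE setup used below):

# blkdebug.conf: inject ENOSPC (errno 28) on the first write request only
[inject-error]
event = "write_aio"
errno = "28"
once = "on"

# attach the image through the blkdebug driver, with the stop policy set
/usr/libexec/qemu-kvm ... \
-device ahci,id=ahci0 \
-drive file=blkdebug:/tmp/blkdebug.conf:/tmp/test.qcow2,if=none,id=drive-sata0,format=qcow2,werror=stop,rerror=stop \
-device ide-hd,drive=drive-sata0,bus=ahci0.0

After the injected error, "info status" should report "VM status: paused (io-error)"; issuing "c" (cont) should resume the guest and let the transfer complete, since the error was injected only once.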

Comment 9 jingzhao 2016-08-17 10:04:01 UTC
Reproduced the bz on host-kernel-3.10.0-327.8.1.el7.x86_64 and qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64

1. Boot guest and create an I/O error

2. check the info in hmp
(qemu) info status
VM status: running

3. qemu core dump
qemu) qemu-kvm: hw/ide/core.c:672: ide_handle_rw_error: Assertion `s->bus->retry_unit == s->unit' failed.
pc.sh: line 28:  5019 Aborted                 (core dumped) /usr/libexec/qemu-kvm -M pc -cpu SandyBridge -nodefaults -rtc base=utc -m 4G -smp 2,sockets=2,cores=1,threads=1 -enable-kvm -name rhel7.3 -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -k en-us -nodefaults -serial unix:/tmp/serial0,server,nowait -boot menu=on -bios /usr/share/seabios/bios.bin -chardev file,path=/home/seabios.log,id=seabios -device isa-debugcon,chardev=seabios,iobase=0x402 -qmp tcp:0:6666,server,nowait -device VGA,id=video -vnc :2 -drive file=/home/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-net-pci,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 -device ahci,id=ahci0 -drive file=/dev/sdb,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop -device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1,bus=ahci0.0 -monitor stdio

(gdb) bt
#0  0x00007f2678b555f7 in raise () from /lib64/libc.so.6
#1  0x00007f2678b56ce8 in abort () from /lib64/libc.so.6
#2  0x00007f2678b4e566 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007f2678b4e612 in __assert_fail () from /lib64/libc.so.6
#4  0x00007f2680865616 in ide_handle_rw_error (s=0x7f2684b4a0b8, error=28, op=64) at hw/ide/core.c:672
#5  0x00007f26808656ac in ide_flush_cb (opaque=0x7f2684b4a0b8, ret=<optimized out>) at hw/ide/core.c:915
#6  0x00007f26808ff5ce in bdrv_co_em_bh (opaque=0x7f2683ffd510) at block.c:5001
#7  0x00007f26808f8734 in aio_bh_poll (ctx=ctx@entry=0x7f2681d80840) at async.c:85
#8  0x00007f2680907869 in aio_dispatch_clients (ctx=0x7f2681d80840, client_mask=client_mask@entry=-1)
    at aio-posix.c:139
#9  0x00007f2680907c2a in aio_dispatch (ctx=<optimized out>) at aio-posix.c:194
#10 0x00007f26808f85ae in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, 
    user_data=<optimized out>) at async.c:219
#11 0x00007f267eb3d79a in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#12 0x00007f26809066e8 in glib_pollfds_poll () at main-loop.c:209
#13 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:254
#14 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:503
#15 0x00007f2680704bfe in main_loop () at vl.c:1818
#16 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4391
(gdb) 

2. Tested it on host-kernel-3.10.0-492.el7.x86_64 and qemu-kvm-rhev-2.6.0-20.el7.x86_64 and found the bz still fails

1. Boot guest and create an I/O error

2. qemu core dump
qemu) 887844sh: line 28:  7582 Segmentation fault      (core dumped) /usr/libexec/qemu-kvm -M pc -cpu SandyBridge -nodefaults -rtc base=utc -m 4G -smp 2,sockets=2,cores=1,threads=1 -enable-kvm -name rhel7.3 -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -k en-us -nodefaults -serial unix:/tmp/serial0,server,nowait -boot menu=on -bios /usr/share/seabios/bios.bin -chardev file,path=/home/seabios.log,id=seabios -device isa-debugcon,chardev=seabios,iobase=0x402 -qmp tcp:0:6666,server,nowait -device VGA,id=video -vnc :2 -drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-net-pci,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 -device ahci,id=ahci0 -drive file=/dev/sdc,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop -device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1,bus=ahci0.0 -monitor stdio

(gdb) bt
#0  0x00007f638d863de0 in ?? ()
#1  0x00007f6388db872a in bdrv_aio_cancel_async (acb=0x7f638d862520) at block/io.c:2060
#2  bdrv_aio_cancel (acb=0x7f638d862520) at block/io.c:2041
#3  0x00007f6388dad7c5 in blk_aio_cancel (acb=<optimized out>) at block/block-backend.c:1044
#4  0x00007f6388ccb7bd in ahci_reset_port (port=<optimized out>, s=<optimized out>) at hw/ide/ahci.c:599
#5  0x00007f6388ccc734 in ahci_port_write (val=<optimized out>, offset=<optimized out>, port=0, s=0x7f638c953e90)
    at hw/ide/ahci.c:301
#6  ahci_mem_write (opaque=0x7f638c953e90, addr=<optimized out>, val=768, size=<optimized out>) at hw/ide/ahci.c:435
#7  0x00007f6388b8dfa3 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, 
    value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-2.6.0/memory.c:525
#8  0x00007f6388b8bf09 in access_with_adjusted_size (addr=addr@entry=300, value=value@entry=0x7f6373d36878, 
    size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, 
    access=access@entry=0x7f6388b8df60 <memory_region_write_accessor>, mr=mr@entry=0x7f638c953eb8, 
    attrs=attrs@entry=...) at /usr/src/debug/qemu-2.6.0/memory.c:591
#9  0x00007f6388b8f725 in memory_region_dispatch_write (mr=mr@entry=0x7f638c953eb8, addr=addr@entry=300, data=768, 
    size=size@entry=4, attrs=attrs@entry=...) at /usr/src/debug/qemu-2.6.0/memory.c:1273
#10 0x00007f6388b52329 in address_space_write_continue (mr=0x7f638c953eb8, l=4, addr1=300, len=4, 
    buf=0x7f6388956028 <Address 0x7f6388956028 out of bounds>, attrs=..., addr=4273811756, 
    as=0x7f638939bb60 <address_space_memory>) at /usr/src/debug/qemu-2.6.0/exec.c:2593
#11 address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, 
    len=<optimized out>) at /usr/src/debug/qemu-2.6.0/exec.c:2651
#12 0x00007f6388b5289d in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., 
    buf=buf@entry=0x7f6388956028 <Address 0x7f6388956028 out of bounds>, len=<optimized out>, 
    is_write=<optimized out>) at /usr/src/debug/qemu-2.6.0/exec.c:2754
#13 0x00007f6388b8b0e0 in kvm_cpu_exec (cpu=cpu@entry=0x7f638c932000) at /usr/src/debug/qemu-2.6.0/kvm-all.c:1950
#14 0x00007f6388b79bd6 in qemu_kvm_cpu_thread_fn (arg=0x7f638c932000) at /usr/src/debug/qemu-2.6.0/cpus.c:1076
#15 0x00007f637e801dc5 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f637cf3c73d in clone () from /lib64/libc.so.6

So I think the bz is not fixed

Thanks
Jing Zhao

Comment 10 John Snow 2016-08-31 19:23:36 UTC
Do you have a script or detailed info on reproducing the crash?
The test case that (is supposed to) test this functionality appears to work for me:


../../configure --target-list=x86_64-softmmu --disable-werror; and make -j9;
jhuston@scv ((qemu-kvm-rhev-2.6.0-22.el7)) ~/s/q/b/git> make tests/ahci-test
jhuston@scv ((qemu-kvm-rhev-2.6.0-22.el7)) ~/s/q/b/git> set -x QTEST_QEMU_BINARY ./x86_64-softmmu/qemu-system-x86_64
jhuston@scv ((qemu-kvm-rhev-2.6.0-22.el7)) ~/s/q/b/git> set -x QTEST_QEMU_IMG ./qemu-img
jhuston@scv ((qemu-kvm-rhev-2.6.0-22.el7)) ~/s/q/b/git> ./tests/ahci-test 
/x86_64/ahci/sanity: OK
/x86_64/ahci/pci_spec: OK
/x86_64/ahci/pci_enable: OK
/x86_64/ahci/hba_spec: OK
/x86_64/ahci/hba_enable: OK
/x86_64/ahci/identify: OK
/x86_64/ahci/max: OK
/x86_64/ahci/reset: OK
/x86_64/ahci/io/pio/lba28/simple/zero: OK
/x86_64/ahci/io/pio/lba28/simple/low: OK
/x86_64/ahci/io/pio/lba28/simple/high: OK
/x86_64/ahci/io/pio/lba28/double/zero: OK
/x86_64/ahci/io/pio/lba28/double/low: OK
/x86_64/ahci/io/pio/lba28/double/high: OK
/x86_64/ahci/io/pio/lba28/long/zero: OK
/x86_64/ahci/io/pio/lba28/long/low: OK
/x86_64/ahci/io/pio/lba28/long/high: OK
/x86_64/ahci/io/pio/lba28/short/zero: OK
/x86_64/ahci/io/pio/lba28/short/low: OK
/x86_64/ahci/io/pio/lba28/short/high: OK
/x86_64/ahci/io/pio/lba48/simple/zero: OK
/x86_64/ahci/io/pio/lba48/simple/low: OK
/x86_64/ahci/io/pio/lba48/simple/high: OK
/x86_64/ahci/io/pio/lba48/double/zero: OK
/x86_64/ahci/io/pio/lba48/double/low: OK
/x86_64/ahci/io/pio/lba48/double/high: OK
/x86_64/ahci/io/pio/lba48/long/zero: OK
/x86_64/ahci/io/pio/lba48/long/low: OK
/x86_64/ahci/io/pio/lba48/long/high: OK
/x86_64/ahci/io/pio/lba48/short/zero: OK
/x86_64/ahci/io/pio/lba48/short/low: OK
/x86_64/ahci/io/pio/lba48/short/high: OK
/x86_64/ahci/io/dma/lba28/fragmented: OK
/x86_64/ahci/io/dma/lba28/retry: OK           <---
/x86_64/ahci/io/dma/lba28/simple/zero: OK
/x86_64/ahci/io/dma/lba28/simple/low: OK
/x86_64/ahci/io/dma/lba28/simple/high: OK
/x86_64/ahci/io/dma/lba28/double/zero: OK
/x86_64/ahci/io/dma/lba28/double/low: OK
/x86_64/ahci/io/dma/lba28/double/high: OK
/x86_64/ahci/io/dma/lba28/long/zero: OK
/x86_64/ahci/io/dma/lba28/long/low: OK
/x86_64/ahci/io/dma/lba28/long/high: OK
/x86_64/ahci/io/dma/lba28/short/zero: OK
/x86_64/ahci/io/dma/lba28/short/low: OK
/x86_64/ahci/io/dma/lba28/short/high: OK
/x86_64/ahci/io/dma/lba48/simple/zero: OK
/x86_64/ahci/io/dma/lba48/simple/low: OK
/x86_64/ahci/io/dma/lba48/simple/high: OK
/x86_64/ahci/io/dma/lba48/double/zero: OK
/x86_64/ahci/io/dma/lba48/double/low: OK
/x86_64/ahci/io/dma/lba48/double/high: OK
/x86_64/ahci/io/dma/lba48/long/zero: OK
/x86_64/ahci/io/dma/lba48/long/low: OK
/x86_64/ahci/io/dma/lba48/long/high: OK
/x86_64/ahci/io/dma/lba48/short/zero: OK
/x86_64/ahci/io/dma/lba48/short/low: OK
/x86_64/ahci/io/dma/lba48/short/high: OK
/x86_64/ahci/io/ncq/simple: OK
/x86_64/ahci/io/ncq/retry: OK                 <---
/x86_64/ahci/flush/simple: OK
/x86_64/ahci/flush/retry: OK                  <---
/x86_64/ahci/flush/migrate: OK
/x86_64/ahci/migrate/sanity: OK
/x86_64/ahci/migrate/dma/simple: OK
/x86_64/ahci/migrate/dma/halted: OK           <---
/x86_64/ahci/migrate/ncq/simple: OK
/x86_64/ahci/migrate/ncq/halted: OK           <---
/x86_64/ahci/cdrom/dma/single: OK
/x86_64/ahci/cdrom/dma/multi: OK
/x86_64/ahci/cdrom/pio/single: OK
/x86_64/ahci/cdrom/pio/multi: OK

jhuston@scv ((qemu-kvm-rhev-2.6.0-22.el7)) ~/s/q/b/git>

Comment 11 jingzhao 2016-09-01 04:48:49 UTC
Hi John 

  Following are the detailed steps for reproducing the bz

1. Create an iSCSI disk on the server side

[root@localhost home]# qemu-img info disk1.raw 
image: disk1.raw
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 14M

o- iscsi ................................................................................................ [Targets: 1]
  o- iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.e9c8826cb25c ........................................... [TPGs: 1]
    o- tpg1 ................................................................................... [no-gen-acls, no-auth]
      o- acls .............................................................................................. [ACLs: 1]
      | o- iqn.1994-05.com.redhat:71acea9bed83 ...................................................... [Mapped LUNs: 1]
      |   o- mapped_lun0 ..................................................................... [lun0 fileio/str0 (rw)]
      o- luns .............................................................................................. [LUNs: 1]
      | o- lun0 ...................................................................... [fileio/str0 (/home/disk1.raw)]
      o- portals ........................................................................................ [Portals: 1]
        o- 0.0.0.0:3260 ......................................................................................... [OK]

2. Client: mount the iSCSI disk
mount /dev/sdb /mnt
In /mnt: qemu-img create -f qcow2 test.qcow2 8G
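
(For reference, a sketch of the client-side attach that precedes the mount; the portal address is a placeholder, the IQN is the target shown in step 1:)

iscsiadm -m discovery -t sendtargets -p <server-ip>
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.e9c8826cb25c -p <server-ip> --login
# the LUN then shows up as a local SCSI disk (/dev/sdb here) and can be mounted as above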

3. Boot the guest with the test.qcow2
[root@localhost home]# cat /home/bug/887844sh 
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge \
-nodefaults -rtc base=utc \
-m 4G \
-smp 2,sockets=2,cores=1,threads=1 \
-enable-kvm \
-name rhel7.3 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-nodefaults \
-serial unix:/tmp/serial0,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-chardev file,path=/home/seabios.log,id=seabios \
-device isa-debugcon,chardev=seabios,iobase=0x402 \
-qmp tcp:0:6666,server,nowait \
-device VGA,id=video \
-vnc :2 \
-drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop \
-device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device virtio-net-pci,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-device ahci,id=ahci0 \
-drive file=/mnt/test.qcow2,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop \
-device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1,bus=ahci0.0 \
-monitor stdio \

4.In guest
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda             8:0    0   8G  0 disk 
vda           252:0    0  30G  0 disk 
├─vda1        252:1    0   1G  0 part /boot
└─vda2        252:2    0  29G  0 part 
  ├─rhel-root 253:0    0  26G  0 lvm  /
  └─rhel-swap 253:1    0   3G  0 lvm  [SWAP]
[root@localhost ~]# dd if=/dev/zero of=/dev/sda bs=1M count=8192


Actual result: qemu quit with core file
(gdb) bt
#0  0x00007f6a3c2dd3c0 in ?? ()
#1  0x00007f6a365dba7a in bdrv_aio_cancel_async (acb=0x7f6a3c2dd4e0) at block/io.c:2060
#2  bdrv_aio_cancel (acb=0x7f6a3c2dd4e0) at block/io.c:2041
#3  0x00007f6a365d0b15 in blk_aio_cancel (acb=<optimized out>) at block/block-backend.c:1044
#4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>, s=<optimized out>) at hw/ide/ahci.c:599
#5  0x00007f6a364efa14 in ahci_port_write (val=<optimized out>, offset=<optimized out>, port=0, s=0x7f6a3a609390)
    at hw/ide/ahci.c:301
#6  ahci_mem_write (opaque=0x7f6a3a609390, addr=<optimized out>, val=768, size=<optimized out>) at hw/ide/ahci.c:435
#7  0x00007f6a363b0343 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, 
    value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-2.6.0/memory.c:525
#8  0x00007f6a363ae2a9 in access_with_adjusted_size (addr=addr@entry=300, value=value@entry=0x7f6a2153e878, 
    size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, 
    access=access@entry=0x7f6a363b0300 <memory_region_write_accessor>, mr=mr@entry=0x7f6a3a6093b8, 
    attrs=attrs@entry=...) at /usr/src/debug/qemu-2.6.0/memory.c:591
#9  0x00007f6a363b1ac5 in memory_region_dispatch_write (mr=mr@entry=0x7f6a3a6093b8, addr=addr@entry=300, data=768, 
    size=size@entry=4, attrs=attrs@entry=...) at /usr/src/debug/qemu-2.6.0/memory.c:1273
#10 0x00007f6a36374599 in address_space_write_continue (mr=0x7f6a3a6093b8, l=4, addr1=300, len=4, 
    buf=0x7f6a36178028 <Address 0x7f6a36178028 out of bounds>, attrs=..., addr=4273811756, 
    as=0x7f6a36bbfd40 <address_space_memory>) at /usr/src/debug/qemu-2.6.0/exec.c:2599
#11 address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, 
    len=<optimized out>) at /usr/src/debug/qemu-2.6.0/exec.c:2657
#12 0x00007f6a36374b0d in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., 
    buf=buf@entry=0x7f6a36178028 <Address 0x7f6a36178028 out of bounds>, len=<optimized out>, 
    is_write=<optimized out>) at /usr/src/debug/qemu-2.6.0/exec.c:2760
#13 0x00007f6a363ad480 in kvm_cpu_exec (cpu=cpu@entry=0x7f6a3a5e8000) at /usr/src/debug/qemu-2.6.0/kvm-all.c:1969
#14 0x00007f6a3639be46 in qemu_kvm_cpu_thread_fn (arg=0x7f6a3a5e8000) at /usr/src/debug/qemu-2.6.0/cpus.c:1076
#15 0x00007f6a2c009dc5 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f6a2a74473d in clone () from /lib64/libc.so.6


ps: the test version
host kernel:3.10.0-496.el7.x86_64
qemu-kvm-rhev-2.6.0-22.el7.x86_64

Thanks
Jing Zhao

Comment 12 John Snow 2016-09-02 17:40:27 UTC
Looks like we're running into the reset-while-halted bug, see https://bugzilla.redhat.com/show_bug.cgi?id=1299876

To clarify your debug process:

(1) You create an iSCSI disk that can only hold about 5GB
(2) You begin copying data in the guest, attempting to fill an 8GB disk
(3) The iSCSI partition/destination fills, triggering, presumably, an -ENOSPC
(4) QEMU crashes

Correct?

Q1: You aren't getting as far as attempting to resume the command, is that true?
Q2: Are you witnessing a STOP qmp event, or not?

Where I am surprised is this:

- Upon ENOSPC, I expect QEMU to *halt* and not report the error back to the guest, so why is the guest issuing an AHCI port reset? (Which triggers BZ 1299876)

Thanks for the details.
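
(For reference on Q2: the reproducer from comment 9 already exposes QMP on tcp:0:6666, so a minimal way to watch for the events is:)

nc localhost 6666
{"execute": "qmp_capabilities"}
# after capabilities negotiation, asynchronous events such as BLOCK_IO_ERROR and STOP
# are printed on this connection as they happen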

Comment 13 jingzhao 2016-09-05 01:13:54 UTC
(In reply to John Snow from comment #12)
> Looks like we're running into the reset-while-halted bug, see
> https://bugzilla.redhat.com/show_bug.cgi?id=1299876
> 
> To clarify your debug process:
> 
> (1) You create an iSCSI disk that can only hold about 5GB
> (2) You begin copying data in the guest, attempting to fill an 8GB disk
---create an 8G disk that is attached to the guest with the qemu-img cmd (qemu-img create -f qcow2 ** 8G)
> (3) The iSCSI partition/destination fills, triggering, presumably, an -ENOSPC
---In the guest, try to write/read using the dd cmd (see step 4 of comment 11)
> (4) QEMU crashes
> 
> Correct?
---Yes, correct

> 
> Q1: You aren't getting as far as attempting to resume the command, is that
> true?
---Right, I am not, because qemu had already crashed

> Q2: Are you witnessing a STOP qmp event, or not?
----No, I am not
> 
> Where I am surprised is this:
> 
> - Upon ENOSPC, I expect QEMU to *halt* and not report the error back to the
> guest, so why is the guest issuing an AHCI port reset? (Which triggers BZ
> 1299876)
---I think bz 1299876 is different from this bz, because I didn't do the "system_reset" steps
> 
> Thanks for the details.

Thanks
Jing Zhao

Comment 14 John Snow 2016-09-07 21:19:05 UTC
Q1) What guest are you testing?
Q2) How reproducible is the failure?

The crash is related to a RESET request occurring after we have halted one or more write requests (BZ #1299876), the trace provided is unmistakable:

#4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>, s=<optimized out>) at hw/ide/ahci.c:599
#5  0x00007f6a364efa14 in ahci_port_write (val=<optimized out>, offset=<optimized out>, port=0, s=0x7f6a3a609390)

This is 100% a guest-originated request to reset the AHCI device.

However, I am not having luck reproducing this. The halt is working for me.

===

jhuston@scv ((qemu-kvm-rhev-2.6.0-23.el7)) ~/s/q/b/git> ./x86_64-softmmu/qemu-system-x86_64 \
                                                         -enable-kvm \
                                                         -M q35 \
                                                         -m 4096 \
                                                         -cpu host \
                                                         -smp 2,sockets=2,cores=1,threads=1 \
                                                         -drive if=none,id=vda,file=/media/ext/img/f24.qcow2 \
                                                         -device virtio-blk,drive=vda,bootindex=1 \
                                                         -drive if=none,id=sda,file=/dev/loop1,rerror=stop,werror=stop,format=qcow2 \
                                                         -device ide-hd,drive=sda \
                                                         -qmp tcp::4444,server,nowait \
                                                         -monitor vc

Here, "vda" is a standard virtio disk hosting a Fedora 24 guest.
"sda" is a /dev/loop1 device that was setup as such:

> touch /media/ext/lo/lo.1g
> truncate --size=1073741824 /media/ext/lo/lo.1g
> losetup -f /media/ext/lo/lo.1g
> qemu-img create -f qcow2 /dev/loop1 10G

So we have a disk that is 1G long at best, but pretends to be 10G.
Now, when I boot the fedora guest, I do this:

# dd if=/dev/zero of=/dev/sda bs=1M count=8192

I see this in my QMP stream:

{u'timestamp': {u'seconds': 1473282362, u'microseconds': 442612}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473282362, u'microseconds': 442726}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473282362, u'microseconds': 442966}, u'event': u'STOP'}


I see a few errors (not surprising, NCQ runs multiple commands in parallel) and then a STOP event. No crash, no reset requests.

Comment 15 jingzhao 2016-09-08 05:12:08 UTC
(In reply to John Snow from comment #14)
> Q1) What guest are you testing?
---RHEL 7.3 guest
> Q2) How reproducible is the failure?

3/3


> 
> The crash is related to a RESET request occurring after we have halted one
> or more write requests (BZ #1299876), the trace provided is unmistakable:
> 
> #4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>,
> s=<optimized out>) at hw/ide/ahci.c:599
> #5  0x00007f6a364efa14 in ahci_port_write (val=<optimized out>,
> offset=<optimized out>, port=0, s=0x7f6a3a609390)
> 
> This is 100% a guest-originated request to reset the AHCI device.

John, do you think it's the same as bz1299876?

> 
> However, I am not having luck reproducing this. The halt is working for me.
> 
> ===
> 
> jhuston@scv ((qemu-kvm-rhev-2.6.0-23.el7)) ~/s/q/b/git>
> ./x86_64-softmmu/qemu-system-x86_64 \
>                                                          -enable-kvm \
>                                                          -M q35 \
>                                                          -m 4096 \
>                                                          -cpu host \
>                                                          -smp
> 2,sockets=2,cores=1,threads=1 \
>                                                          -drive
> if=none,id=vda,file=/media/ext/img/f24.qcow2 \
>                                                          -device
> virtio-blk,drive=vda,bootindex=1 \
>                                                          -drive
> if=none,id=sda,file=/dev/loop1,rerror=stop,werror=stop,format=qcow2 \
>                                                          -device
> ide-hd,drive=sda \
>                                                          -qmp
> tcp::4444,server,nowait \
>                                                          -monitor vc
> 
> Here, "vda" is a standard virtio disk hosting a Fedora 24 guest.
> "sda" is a /dev/loop1 device that was setup as such:
> 
> > touch /media/ext/lo/lo.1g
> > truncate --size=1073741824 /media/ext/lo/lo.1g
> > losetup -f /media/ext/lo/lo.1g
> > qemu-img create -f qcow2 /dev/loop1 10G
> 
> So we have a disk that is 1G long at best, but pretends to be 10G.
> Now, when I boot the fedora guest, I do this:
> 
> # dd if=/dev/zero of=/dev/sda bs=1M count=8192
> 
> I see this in my QMP stream:
> 
> {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442612}, u'data':
> {u'nospace': True, u'action': u'stop', u'reason': u'No space left on
> device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason':
> u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
> {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442726}, u'data':
> {u'nospace': True, u'action': u'stop', u'reason': u'No space left on
> device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason':
> u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
> {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442966}, u'event':
> u'STOP'}
> 
> 
> I see a few errors (not surprising, NCQ runs multiple commands in parallel)
> and then a STOP command. no crash, no reset requests.

And I checked the bz with the above steps
1.Can reproduce the bz on qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64

2. Tested it on qemu-kvm-rhev-2.6.0-22.el7.x86_64
1)Boot guest with following command:
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge \
-nodefaults -rtc base=utc \
-m 4G \
-smp 2,sockets=2,cores=1,threads=1 \
-enable-kvm \
-name rhel7.3 \
-k en-us \
-nodefaults \
-serial unix:/tmp/serial0,server,nowait \
-boot menu=on \
-qmp tcp:0:6666,server,nowait \
-vga qxl \
-vnc :2 \
-drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop \
-device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device ahci,id=ahci0 \
-drive file=/dev/loop0,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop \
-device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1i,bus=ahci0.0 \
-monitor stdio \

2.In guest
# dd if=/dev/zero of=/dev/sda bs=1M count=8192

3. (qemu) info status
VM status: paused (io-error)

4.enlarge the size with truncate -s 10737418240 lo.1g
check the size
-rw-r--r--. 1 root root 10737418240 Sep  8 13:02 lo.1g

5. In hmp 
(qemu) c
(qemu) info status
VM status: paused (io-error)

So I want to confirm step 4 and step 5 with you: are they right? Is this the expected result?

Thanks
Jing

Comment 16 John Snow 2016-09-08 18:45:13 UTC
(In reply to jingzhao from comment #15)
> (In reply to John Snow from comment #14)
> > Q1) What guest are you testing?
> ---Rhel guest 7.3
> > Q2) How reproducible is the failure?
> 
> 3/3
> 

3/3 it segfaults, or 3/3 it doesn't appear to halt/resume correctly?

I am trying to determine if the RHEV 2.6.0 based QEMU crashes on ENOSPC or not. From what I can tell from your reply here, it does not, but there may be issues with resume appearing to do nothing.

Am I correct?

> 
> > 
> > The crash is related to a RESET request occurring after we have halted one
> > or more write requests (BZ #1299876), the trace provided is unmistakable:
> > 
> > #4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>,
> > s=<optimized out>) at hw/ide/ahci.c:599
> > #5  0x00007f6a364efa14 in ahci_port_write (val=<optimized out>,
> > offset=<optimized out>, port=0, s=0x7f6a3a609390)
> > 
> > This is 100% a guest-originated request to reset the AHCI device.
> 
> John, you think it's the same with bz1299876? right?
> 

Yes, the trace looks identical. If you see a crash happening that looks like this:

#0  ???
#1  0x00007f6a365dba7a in bdrv_aio_cancel_async (acb=0x7f6a3c2dd4e0) at block/io.c:2060
#2  bdrv_aio_cancel (acb=0x7f6a3c2dd4e0) at block/io.c:2041
#3  0x00007f6a365d0b15 in blk_aio_cancel (acb=<optimized out>) at block/block-backend.c:1044
#4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>, s=<optimized out>) at hw/ide/ahci.c:599

i.e.

#0: [N/A]
#1: bdrv_aio_cancel_async
#2: bdrv_aio_cancel
#3: blk_aio_cancel
#4: ahci_reset_port

I am certain that this is BZ1299876. There are multiple ways this could be triggered:

(1) As in the QE report for 1299876, if you manually reset the device after a HALT, or
(2) If there is some problem with HALT not working correctly and the guest itself attempts to reset the device after a failed request.

#1 is fixed as of qemu 2.7.0 (It is still a known problem in the 2.6.0 variants but was fixed in 2.6.1 upstream)

#2 should be fixed as of qemu 2.4.0; if you are seeing #2 in 2.3.0 this is not particularly alarming or worrying as this was before a lot of sata/ahci/ncq fixes.

> > 
> > However, I am not having luck reproducing this. The halt is working for me.
> > 
> > ===
> > 
> > jhuston@scv ((qemu-kvm-rhev-2.6.0-23.el7)) ~/s/q/b/git>
> > ./x86_64-softmmu/qemu-system-x86_64 \
> >                                                          -enable-kvm \
> >                                                          -M q35 \
> >                                                          -m 4096 \
> >                                                          -cpu host \
> >                                                          -smp
> > 2,sockets=2,cores=1,threads=1 \
> >                                                          -drive
> > if=none,id=vda,file=/media/ext/img/f24.qcow2 \
> >                                                          -device
> > virtio-blk,drive=vda,bootindex=1 \
> >                                                          -drive
> > if=none,id=sda,file=/dev/loop1,rerror=stop,werror=stop,format=qcow2 \
> >                                                          -device
> > ide-hd,drive=sda \
> >                                                          -qmp
> > tcp::4444,server,nowait \
> >                                                          -monitor vc
> > 
> > Here, "vda" is a standard virtio disk hosting a Fedora 24 guest.
> > "sda" is a /dev/loop1 device that was setup as such:
> > 
> > > touch /media/ext/lo/lo.1g
> > > truncate --size=1073741824 /media/ext/lo/lo.1g
> > > losetup -f /media/ext/lo/lo.1g
> > > qemu-img create -f qcow2 /dev/loop1 10G
> > 
> > So we have a disk that is 1G long at best, but pretends to be 10G.
> > Now, when I boot the fedora guest, I do this:
> > 
> > # dd if=/dev/zero of=/dev/sda bs=1M count=8192
> > 
> > I see this in my QMP stream:
> > 
> > {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442612}, u'data':
> > {u'nospace': True, u'action': u'stop', u'reason': u'No space left on
> > device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason':
> > u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
> > {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442726}, u'data':
> > {u'nospace': True, u'action': u'stop', u'reason': u'No space left on
> > device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason':
> > u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
> > {u'timestamp': {u'seconds': 1473282362, u'microseconds': 442966}, u'event':
> > u'STOP'}
> > 
> > 
> > I see a few errors (not surprising, NCQ runs multiple commands in parallel)
> > and then a STOP command. no crash, no reset requests.
> 
> And I checked the bz with above steps
> 1.Can reproduce the bz on qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64
> 
> 2. Tested it on qemu-kvm-rhev-2.6.0-22.el7.x86_64
> 1)Boot guest with following command:
> /usr/libexec/qemu-kvm \
> -M pc \
> -cpu SandyBridge \
> -nodefaults -rtc base=utc \
> -m 4G \
> -smp 2,sockets=2,cores=1,threads=1 \
> -enable-kvm \
> -name rhel7.3 \
> -k en-us \
> -nodefaults \
> -serial unix:/tmp/serial0,server,nowait \
> -boot menu=on \
> -qmp tcp:0:6666,server,nowait \
> -vga qxl \
> -vnc :2 \
> -drive
> file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,
> werror=stop,rerror=stop \
> -device
> virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> \
> -device ahci,id=ahci0 \
> -drive
> file=/dev/loop0,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,
> rerror=stop \
> -device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1i,bus=ahci0.0 \
> -monitor stdio \
> 
> 2.In guest
> # dd if=/dev/zero of=/dev/sda bs=1M count=8192
> 
> 3. (qemu) info status
> VM status: paused (io-error)
> 

So you didn't see a _crash_ on _2.6.x_?
I think that's good news; does that mean you only see the crash on 2.3.x?

> 4.enlarge the size with truncate -s 10737418240 lo.1g
> check the size
> -rw-r--r--. 1 root root 10737418240 Sep  8 13:02 lo.1g
> 

Sorry if I misled you, I don't think this is sufficient to test the ability to resume.
I think the loopback device gets a fixed size when you run losetup.
If you try to resume like this it's just going to re-HALT immediately.

Either use a proper growable target (lvm, iscsi, whatever is convenient for you, I'm sorry -- this is cumbersome to me, too) or you can migrate to a second, bigger loopback:

# create /dev/loop2 with a bigger backing file, e.g. 4G
# issue a QMP drive-mirror command while stopped to pivot to the bigger drive:
drive-mirror device=sda target=/dev/loop2 sync=full
# Observe the job is "ready" and issue completion:
{u'timestamp': {u'seconds': 1473359282, u'microseconds': 87705}, u'data': {u'device': u'sda', u'type': u'mirror', u'speed': 0, u'len': 535691264, u'offset': 535691264}, u'event': u'BLOCK_JOB_READY'}
(QEMU) block-job-complete device=sda
{
    "execute": "block-job-complete", 
    "arguments": {
        "device": "sda"
    }
}
{
    "return": {}
}
(QEMU) 
{u'timestamp': {u'seconds': 1473359387, u'microseconds': 265624}, u'data': {u'device': u'sda', u'type': u'mirror', u'speed': 0, u'len': 535691264, u'offset': 535691264}, u'event': u'BLOCK_JOB_COMPLETED'}


Now we should be on a bigger drive with more capacity.

Continue reading below for results...


> 5. In hmp 
> (qemu) c
> (qemu) info status
> VM status: paused (io-error)
> 
> So I want to confirm with you about the step 4 and step 5, are they right?
> is it the expected result?
> 

This is possibly the result of a race condition wherein the VM receives more than one ENOSPC and pauses "multiple times."

[Let me investigate this. I will follow-up.]

Here's a case where the VM paused just once:

{u'timestamp': {u'seconds': 1473358448, u'microseconds': 551052}, u'data': {u'device': u'sda', u'action': u'stop', u'reason': u'No space left on device', u'operation': u'write', u'nospace': True}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473358448, u'microseconds': 551273}, u'event': u'STOP'}
(QEMU) 


Here's what happens when I resume without fixing the space error:

(QEMU) human-monitor-command command-line=c
{
    "execute": "human-monitor-command", 
    "arguments": {
        "command-line": "c"
    }
}
{
    "return": ""
}
(QEMU) 
{u'timestamp': {u'seconds': 1473358536, u'microseconds': 333416}, u'event': u'RESUME'}
{u'timestamp': {u'seconds': 1473358536, u'microseconds': 333630}, u'data': {u'device': u'sda', u'action': u'stop', u'reason': u'No space left on device', u'operation': u'write', u'nospace': True}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473358536, u'microseconds': 333805}, u'event': u'STOP'}

i.e. just one BLOCK_IO_ERROR and one STOP event.

Lastly, here's what happens when I fix the space issue by mirroring to a bigger drive and then resuming:

(QEMU) human-monitor-command command-line=c
{
    "execute": "human-monitor-command", 
    "arguments": {
        "command-line": "c"
    }
}
{
    "return": ""
}
(QEMU) 
{u'timestamp': {u'seconds': 1473359437, u'microseconds': 164602}, u'event': u'RESUME'}


And now I don't expect to see another HALT for a while. (Of course, the disk may fill up again if we keep dd running.)


> Thanks
> Jing

Comment 17 John Snow 2016-09-08 19:28:03 UTC
I tried again, trying to coax out multiple concurrent failures.

jhuston@scv /m/e/lo> ls -lah
-rw-rw-r--.  1 jhuston jhuston 1.1G Sep  8 15:18 1g.lo
-rw-rw-r--.  1 jhuston jhuston 2.0G Sep  7 15:49 2g.lo

(Note that the '1g' file here is actually 1.1g, 1147483648 bytes exactly -- why? I don't know, but the uneven size helped to coax out the concurrent IO failures.)

jhuston@scv /m/e/lo> losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE           DIO
/dev/loop0         0      0         0  0 /media/ext/lo/1g.lo   0
/dev/loop1         0      0         0  0 /media/ext/lo/4g.lo   0


jhuston@scv ((qemu-kvm-rhev-2.6.0-23.el7)) ~/s/q/b/git> ./x86_64-softmmu/qemu-system-x86_64 \
                                                        -enable-kvm \
                                                        -M q35 \
                                                        -m 4096 \
                                                        -cpu host \
                                                        -smp 2,sockets=2,cores=1,threads=1 \
                                                        -drive if=none,id=vda,file=/media/ext/img/f24.qcow2 \
                                                        -device virtio-blk,drive=vda,bootindex=1 \
                                                        -drive if=none,id=sda,file=/dev/loop0,rerror=stop,werror=stop,format=qcow2 \
                                                        -device ide-hd,drive=sda,bus=ide.0 \
                                                        -qmp tcp::4444,server,nowait \
                                                        -monitor vc


Then, booting the guest, I invoke the DD command:

# dd if=/dev/zero of=/dev/sda bs=1M count=3000

And we get a lot of failures at once:

(QEMU) 
{u'timestamp': {u'seconds': 1473362329, u'microseconds': 149070}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473362329, u'microseconds': 149173}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473362329, u'microseconds': 149209}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}
{u'timestamp': {u'seconds': 1473362329, u'microseconds': 149365}, u'event': u'STOP'}
{u'timestamp': {u'seconds': 1473362329, u'microseconds': 149394}, u'data': {u'nospace': True, u'action': u'stop', u'reason': u'No space left on device', u'device': u'sda', u'operation': u'write', u'__com.redhat_reason': u'enospc'}, u'event': u'BLOCK_IO_ERROR'}


OK, let's migrate to the 4G device:

(QEMU) 
(QEMU) drive-mirror device=sda target=/dev/loop1 mode=existing sync=full
{
    "execute": "drive-mirror", 
    "arguments": {
        "device": "sda", 
        "sync": "full", 
        "target": "/dev/loop1", 
        "mode": "existing"
    }
}
{
    "return": {}
}
(QEMU) 
{u'timestamp': {u'seconds': 1473362592, u'microseconds': 982845}, u'data': {u'device': u'sda', u'type': u'mirror', u'speed': 0, u'len': 1147011072, u'offset': 1147011072}, u'event': u'BLOCK_JOB_READY'}
(QEMU) block-job-complete device=sda
{
    "execute": "block-job-complete", 
    "arguments": {
        "device": "sda"
    }
}
{
    "return": {}
}
(QEMU) 
{u'timestamp': {u'seconds': 1473362608, u'microseconds': 253082}, u'data': {u'device': u'sda', u'type': u'mirror', u'speed': 0, u'len': 1147011072, u'offset': 1147011072}, u'event': u'BLOCK_JOB_COMPLETED'}


Alright, now let's try resuming the guest:

(QEMU) 
(QEMU) human-monitor-command command-line=c
{
    "execute": "human-monitor-command", 
    "arguments": {
        "command-line": "c"
    }
}
{
    "return": ""
}
(QEMU) 
{u'timestamp': {u'seconds': 1473362637, u'microseconds': 614318}, u'event': u'RESUME'}
(QEMU) 



Works OK with just a single resume even after many failures:

3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB, 2.9 GiB) copied, 25.0459 s, 126 MB/s



So as far as I can tell everything is working as intended; but please correct me if I have misunderstood anything. Thank you for your patience! :)
--js

Comment 18 jingzhao 2016-09-09 05:51:43 UTC
Hi John

   Thanks for your help. I summarized the test result; please check the following comments.

   Verified the bz on host-kernel-3.10.0-503.el7.x86_64 and qemu-kvm-rhev-2.6.0-23.el7.x86_64

Following are the detailed steps

1. Prepare the environment
[root@jinzhao bug]# touch lo.1g
[root@jinzhao bug]# truncate --size=1073741824 lo.1g 
[root@jinzhao bug]# touch lo.10g
[root@jinzhao bug]# truncate --size=10737418240 lo.10g
[root@jinzhao bug]# ls -lha lo.1g lo.10g
-rw-r--r--. 1 root root  10G Sep  9 10:36 lo.10g
-rw-r--r--. 1 root root 1.0G Sep  9 10:29 lo.1g
[root@jinzhao bug]# losetup 
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /home/bug/lo.1g
/dev/loop1         0      0         0  0 /home/bug/lo.10g
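
(The losetup bindings themselves are not shown; presumably something like the following, plus formatting the small device as an oversized qcow2 as in comment 14 -- the exact sizes are an assumption:)

losetup -f /home/bug/lo.1g      # becomes /dev/loop0
losetup -f /home/bug/lo.10g     # becomes /dev/loop1
qemu-img create -f qcow2 /dev/loop0 10G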

2.Boot guest with /dev/loop0
3.In guest, dd if=/dev/zero of=/dev/sda bs=1M count=300
4.Check the test result
(qemu) info status
VM status: paused (io-error)

{"execute": "drive-mirror", "arguments": {"device": "drive-virtio-disk1", "sync": "full", "target": "/dev/loop1", "mode": "existing"}}
{"return": {}}
{"timestamp": {"seconds": 1473390863, "microseconds": 212768}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-virtio-disk1", "len": 1073348608, "offset": 1073348608, "speed": 0, "type": "mirror"}}
{"execute": "block-job-complete", "arguments": {"device": "drive-virtio-disk1"}}
{"return": {}}
{"timestamp": {"seconds": 1473390883, "microseconds": 955663}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1", "len": 1073348608, "offset": 1073348608, "speed": 0, "type": "mirror"}}

(qemu) c
(qemu) info status
VM status: running

the dd job can finish and the guest works normally

ps: following are the qemu commands which I used (pc machine type and q35 machine type)

[root@jinzhao home]# cat /home/bug/887844sh 
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge \
-nodefaults -rtc base=utc \
-m 4G \
-smp 2,sockets=2,cores=1,threads=1 \
-enable-kvm \
-name rhel7.3 \
-k en-us \
-nodefaults \
-serial unix:/tmp/serial0,server,nowait \
-boot menu=on \
-qmp tcp:0:6666,server,nowait \
-vga qxl \
-vnc :2 \
-drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop \
-device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device ahci,id=ahci0 \
-drive file=/dev/loop0,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop \
-device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1i,bus=ahci0.0 \
-device virtio-net-pci,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-monitor stdio \
[root@jinzhao home]# cat /home/bug/887844-2sh 
/usr/libexec/qemu-kvm \
-M q35 \
-cpu SandyBridge \
-nodefaults -rtc base=utc \
-m 4G \
-smp 2,sockets=2,cores=1,threads=1 \
-enable-kvm \
-name rhel7.3 \
-k en-us \
-nodefaults \
-serial unix:/tmp/serial0,server,nowait \
-boot menu=on \
-qmp tcp:0:6666,server,nowait \
-vga qxl \
-vnc :2 \
-drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop \
-device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-drive file=/mnt/test.qcow2,if=none,id=drive-virtio-disk1,format=qcow2,werror=stop,rerror=stop \
-device ide-hd,drive=drive-virtio-disk1,id=virtio-disk1i \
-device virtio-net-pci,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-monitor stdio \

Comment 19 jingzhao 2016-09-09 06:10:02 UTC
(In reply to John Snow from comment #16)
> (In reply to jingzhao from comment #15)
> > (In reply to John Snow from comment #14)
> > > Q1) What guest are you testing?
> > ---Rhel guest 7.3
> > > Q2) How reproducible is the failure?
> > 
> > 3/3
> > 
> 
> 3/3 it segfaults, or 3/3 it doesn't appear to halt/resume correctly?

----I mean I hit the segfaults 3/3

> 
> I am trying to determine if the RHEV 2.6.0 based QEMU crashes on ENOSPC or
> not. From what I can tell from your reply here, it does not, but there may
> be issues with resume appearing to do nothing.
> 
> Am I correct?


----I am not sure, I used qemu-kvm-rhev-2.6.0-23.el7.x86_64 when I hit the segfault issue.

> 
> > 
> > > 
> > > The crash is related to a RESET request occurring after we have halted one
> > > or more write requests (BZ #1299876), the trace provided is unmistakable:
> > > 
> > > #4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>,
> > > s=<optimized out>) at hw/ide/ahci.c:599
> > > #5  0x00007f6a364efa14 in ahci_port_write (val=<optimized out>,
> > > offset=<optimized out>, port=0, s=0x7f6a3a609390)
> > > 
> > > This is 100% a guest-originated request to reset the AHCI device.
> > 
> > John, you think it's the same with bz1299876? right?
> > 
> 
> Yes, the trace looks identical. If you see a crash happening that looks like
> this:
> 
> #0  ???
> #1  0x00007f6a365dba7a in bdrv_aio_cancel_async (acb=0x7f6a3c2dd4e0) at
> block/io.c:2060
> #2  bdrv_aio_cancel (acb=0x7f6a3c2dd4e0) at block/io.c:2041
> #3  0x00007f6a365d0b15 in blk_aio_cancel (acb=<optimized out>) at
> block/block-backend.c:1044
> #4  0x00007f6a364eea9d in ahci_reset_port (port=<optimized out>,
> s=<optimized out>) at hw/ide/ahci.c:599
> 
> i.e.
> 
> #0: [N/A]
> #1: bdrv_aio_cancel_sync
> #2: bdrv_aio_cancel
> #3: blk_aio_cancel
> #4: ahci_reset_port
> 
> I am certain that this is BZ1299876. There are multiple ways this could be
> triggered:
> 
> (1) As in the QE report for 1299876, if you manually reset the device after
> a HALT, or
> (2) If there is some problem with HALT not working correctly and the guest
> itself attempts to reset the device after a failed request.
> 
> #1 is fixed as of qemu 2.7.0 (It is still a known problem in the 2.6.0
> variants but was fixed in 2.6.1 upstream)
> 
> #2 should be fixed as of qemu 2.4.0; if you are seeing #2 in 2.3.0 this is
> not particularly alarming or worrying as this was before a lot of
> sata/ahci/ncq fixes.
> 

Checked #2, and I think I triggered the call trace via (2), because I did not do "system_reset". So I think it is different, although the call trace is the same as the one you mentioned above.

#0  0x00007fdcfd9c7720 in ?? ()
#1  0x00007fdcfbe6ae7a in bdrv_aio_cancel_async (acb=0x7fdcfd9c65b0) at block/io.c:2060
#2  bdrv_aio_cancel (acb=0x7fdcfd9c65b0) at block/io.c:2041
#3  0x00007fdcfbe5feb5 in blk_aio_cancel (acb=<optimized out>) at block/block-backend.c:1044
#4  0x00007fdcfbd7dd5d in ahci_reset_port (port=<optimized out>, s=<optimized out>) at hw/ide/ahci.c:599
#5  0x00007fdcfbd7ecd4 in ahci_port_write (val=<optimized out>, offset=<optimized out>, port=5, s=0x7fdcfefa3390) at hw/ide/ahci.c:301
.................. 

And I checked what is different between your setup and mine.
1) machine type: you used the q35 machine type, but I used the pc machine type
2) the method which creates the I/O error; I listed the method I used:
a. create 1G file through qemu-img
b. bind it to iscsi server
c. mount the lun device to the client
Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 8388608 bytes
....
/dev/sdc on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
d. create 10G disk through qemu-img on the lun disk
[root@jinzhao mnt]# qemu-img info test.qcow2 
image: test.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 982M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
e.guest used the "test.qcow2"

I modified the machine type and also hit the segfault issue.

So I think we can verify the bz according to comment 18 and file a new bz for tracking the call trace. Do you agree? Any comments for the bz?

Thanks
Jing Zhao

Comment 21 jingzhao 2016-09-14 00:40:28 UTC
According to comment 18, moved the bz to verified; for the core dump issue, please check bz1375520

Comment 22 John Snow 2016-09-14 21:22:51 UTC
Closing needinfo as we have migrated to a new BZ.

Comment 24 errata-xmlrpc 2016-11-07 20:09:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

