Bug 1703793 - QEMU core dumps when the block-mirror speed is set after the target gluster server volume is stopped
Summary: QEMU core dumps when the block-mirror speed is set after the target gluster server volume is stopped
Status: CLOSED DUPLICATE of bug 1709791
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Max Reitz
QA Contact: aihua liang
Duplicates: 1589627
Reported: 2019-04-28 11:41 UTC by aihua liang
Modified: 2019-06-25 17:15 UTC (History)
CC: 7 users

Last Closed: 2019-06-25 17:15:38 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3481211 Troubleshoot None RHV: Virtual Machines fail to start and migrate with "Failed to lock byte 100" 2019-06-14 02:22 UTC

Description aihua liang 2019-04-28 11:41:01 UTC
Description of problem:
  QEMU core dumps when the block-mirror speed is set after the target gluster server volume is stopped

Version-Release number of selected component (if applicable):
  kernel version: 3.10.0-1037.el7.x86_64
  qemu-kvm-rhev version: qemu-kvm-rhev-2.12.0-27.el7.x86_64
  gluster version on server: glusterfs-server-6.0-2.el7rhgs.x86_64
  gluster version on client: glusterfs-6.0-2.el7.x86_64

How reproducible:
 100%

Steps to Reproduce:
1.Mount gluster storage as mirror target
  mount.glusterfs intel-5405-32-2.englab.nay.redhat.com:/aliang /mnt/aliang

2.Start guest with cmds:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190423-215834-BzwOjODj,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190423-215834-BzwOjODj,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idapUGH0  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190423-215834-BzwOjODj,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190423-215834-BzwOjODj,path=/var/tmp/seabios-20190423-215834-BzwOjODj,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190423-215834-BzwOjODj,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel77-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pci.0,addr=0x4,iothread=iothread0 \
    -drive id=drive_data1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/test.img \
    -device virtio-blk-pci,id=data1,drive=drive_data1,bus=pci.0,addr=0x6,iothread=iothread0 \
    -device virtio-net-pci,mac=9a:84:85:86:87:88,id=idc38p8G,vectors=4,netdev=idFM5N3v,bus=pci.0,addr=0x5  \
    -netdev tap,id=idFM5N3v,vhost=on \
    -m 2048  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Westmere',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

3.In guest, format data disk and write data on it
  (guest)#mkfs.ext4 /dev/vdb
  (guest)#dd if=/dev/urandom of=/dev/vdb bs=1M count=900 oflag=direct

4.Do mirror to target
 { "execute": "drive-mirror", "arguments": { "device": "drive_data1", "target": "/mnt/aliang/mirror", "format": "qcow2", "mode": "absolute-paths", "sync": "full","speed": 100}}
 {"execute":"query-block-jobs"}
 {"return": [{"auto-finalize": true, "io-status": "ok", "device": "drive_data1", "auto-dismiss": true, "busy": false, "len": 943783936, "offset": 16777216, "status": "running", "paused": false, "speed": 100, "ready": false, "type": "mirror"}]}
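Since the guest exposes a QMP monitor on tcp:0:3000 (see the -qmp option in step 2), the commands above can also be driven from a script. A minimal sketch, assuming that socket; `qmp_cmd` and `qmp_session` are hypothetical helper names, not part of any test tooling referenced here:

```python
import json
import socket

def qmp_cmd(execute, **arguments):
    """Build a QMP command payload like the ones shown in the steps above."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return cmd

def qmp_session(commands, host="127.0.0.1", port=3000):
    """Send commands over the -qmp tcp:0:3000 monitor socket.

    The QMP greeting must be consumed and qmp_capabilities negotiated
    before any other command is accepted.
    """
    with socket.create_connection((host, port)) as s:
        f = s.makefile("rw")
        f.readline()                                    # server greeting
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        f.readline()                                    # {"return": {}}
        for cmd in commands:
            f.write(json.dumps(cmd) + "\n")
            f.flush()
            print(f.readline().rstrip())

# The drive-mirror command from step 4, built as a payload:
mirror = qmp_cmd("drive-mirror", device="drive_data1",
                 target="/mnt/aliang/mirror", format="qcow2",
                 mode="absolute-paths", sync="full", speed=100)
```

Running `qmp_session([mirror, qmp_cmd("query-block-jobs")])` against a live guest would reproduce steps 4 and the job query above.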

5.Stop gluster volume
  (gluster server)#gluster volume stop aliang

6.Check block job status on the gluster client.
  {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "drive_data1", "auto-dismiss": true, "busy": false, "len": 943783936, "offset": 16777216, "status": "running", "paused": false, "speed": 100, "ready": false, "type": "mirror"}]}

7.Reset the mirror job speed to 0.
  { "execute": "block-job-set-speed", "arguments": { "device": "drive_data1","speed":0}}
{"return": {}}
{"timestamp": {"seconds": 1556449564, "microseconds": 824031}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 825648}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 829394}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 833172}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 836874}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 844250}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 847903}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 851676}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 859841}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 868827}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 872458}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 875860}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 879810}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 883402}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 892937}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 895742}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_data1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556449564, "microseconds": 895961}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "drive_data1"}}
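The burst of BLOCK_JOB_ERROR events followed by the "aborting" status change can be detected when consuming the event stream from the monitor. A sketch with a hypothetical `watch_job` helper; the sample events in the usage note mirror the ones above:

```python
import json

def watch_job(lines, job_id="drive_data1"):
    """Scan a stream of QMP event lines (as in step 7) for one job.

    Returns (error_count, final_status).  'lines' is any iterable of
    raw JSON strings, e.g. a socket file object.
    """
    errors = 0
    status = None
    for line in lines:
        msg = json.loads(line)
        event = msg.get("event")
        if event == "BLOCK_JOB_ERROR" and msg["data"]["device"] == job_id:
            errors += 1
        elif event == "JOB_STATUS_CHANGE" and msg["data"]["id"] == job_id:
            status = msg["data"]["status"]
            if status in ("aborting", "concluded", "null"):
                break                 # job reached a terminal-ish state
    return errors, status
```

Fed the event stream shown above, this would report 16 write errors and a final status of "aborting".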
  


Actual results:
After the mirror job failed, QEMU core dumped with:
  Unexpected error in raw_reconfigure_getfd() at block/file-posix.c:903:
qemu-kvm: Could not reopen file: Transport endpoint is not connected
aliang_le.txt: line 33: 11486 Aborted                 (core dumped) /usr/libexec/qemu-kvm -name 'avocado-vt-vm1' -machine pc -nodefaults -device VGA,bus=pci.0,addr=0x2 -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190423-215834-BzwOjODj,server,nowait -mon chardev=qmp_id_qmpmonitor1,mode=control -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190423-215834-BzwOjODj,server,nowait -mon chardev=qmp_id_catch_monitor,mode=control -device pvpanic,ioport=0x505,id=idapUGH0 -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190423-215834-BzwOjODj,server,nowait -device isa-serial,chardev=serial_id_serial0 ...

gdb backtrace as below:
  (gdb) bt
#0  0x00007f5a763a3337 in raise () at /lib64/libc.so.6
#1  0x00007f5a763a4a28 in abort () at /lib64/libc.so.6
#2  0x0000561d4cfda8e4 in error_handle_fatal (errp=<optimized out>, err=0x561d51390e40) at util/error.c:38
#3  0x0000561d4cfda9bd in error_setv (errp=0x561d4dc617e0 <error_abort>, src=0x561d4d152b2f "block/file-posix.c", line=903, func=0x561d4d153530 <__func__.26763> "raw_reconfigure_getfd", err_class=ERROR_CLASS_GENERIC_ERROR, fmt=<optimized out>, ap=0x7ffcda481cc0, suffix=0x7f5a764f8150 "Transport endpoint is not connected") at util/error.c:71
#4  0x0000561d4cfdac52 in error_setg_errno_internal (errp=errp@entry=0x561d4dc617e0 <error_abort>, src=src@entry=0x561d4d152b2f "block/file-posix.c", line=line@entry=903, func=func@entry=0x561d4d153530 <__func__.26763> "raw_reconfigure_getfd", os_errno=107, fmt=fmt@entry=0x561d4d152c82 "Could not reopen file") at util/error.c:111
#5  0x0000561d4cf43f14 in raw_reconfigure_getfd (bs=bs@entry=0x561d504af400, flags=<optimized out>, open_flags=open_flags@entry=0x7ffcda481e04, perm=perm@entry=11, force_dup=force_dup@entry=false, errp=errp@entry=0x561d4dc617e0 <error_abort>) at block/file-posix.c:903
#6  0x0000561d4cf44055 in raw_check_perm (bs=0x561d504af400, perm=11, shared=21, errp=0x561d4dc617e0 <error_abort>) at block/file-posix.c:2561
#7  0x0000561d4cef9a1f in bdrv_check_perm (bs=bs@entry=0x561d504af400, q=0x0, 
    q@entry=0x69bf6bad529d9000, cumulative_perms=11, cumulative_shared_perms=21, ignore_children=ignore_children@entry=0x561d50184390 = {...}, errp=errp@entry=0x561d4dc617e0 <error_abort>) at block.c:1751
#8  0x0000561d4cef987b in bdrv_check_update_perm (bs=0x561d504af400, q=0x69bf6bad529d9000, 
    q@entry=0x0, new_used_perm=new_used_perm@entry=11, new_shared_perm=new_shared_perm@entry=21, ignore_children=ignore_children@entry=0x561d50184390 = {...}, errp=errp@entry=0x561d4dc617e0 <error_abort>) at block.c:1937
#9  0x0000561d4cef9b0c in bdrv_check_perm (errp=0x561d4dc617e0 <error_abort>, ignore_children=0x561d50184390 = {...}, shared=21, perm=11, q=0x0, c=0x561d52190be0) at block.c:1950
#10 0x0000561d4cef9b0c in bdrv_check_perm (bs=0x561d504b2800, bs@entry=0xb, q=0x0, 
    q@entry=0x69bf6bad529d9000, cumulative_perms=0, cumulative_shared_perms=31, ignore_children=ignore_children@entry=0x561d501842b0 = {...}, errp=0x561d4dc617e0 <error_abort>, 
    errp@entry=0x15) at block.c:1767
#11 0x0000561d4cef987b in bdrv_check_update_perm (bs=0xb, q=0x69bf6bad529d9000, 
    q@entry=0x0, new_used_perm=new_used_perm@entry=0, new_shared_perm=new_shared_perm@entry=31, ignore_children=ignore_children@entry=0x561d501842b0 = {...}, errp=0x15, 
    errp@entry=0x561d4dc617e0 <error_abort>) at block.c:1937
#12 0x0000561d4cefa08f in bdrv_child_try_set_perm (errp=0x561d4dc617e0 <error_abort>, ignore_children=0x561d501842b0 = {...}, shared=31, perm=0, q=0x0, c=0x561d52191680)
    at block.c:1950
#13 0x0000561d4cefa08f in bdrv_child_try_set_perm (c=0x561d52191680, perm=0, shared=31, errp=0x561d4dc617e0 <error_abort>) at block.c:1978
#14 0x0000561d4cf3d42d in blk_set_perm (blk=0x561d501b38c0, perm=perm@entry=0, shared_perm=shared_perm@entry=31, errp=errp@entry=0x561d4dc617e0 <error_abort>)
    at block/block-backend.c:822
#15 0x0000561d4cf472b5 in mirror_exit_common (job=0x561d501b3b80) at block/mirror.c:521
#16 0x0000561d4cf47f29 in mirror_abort (job=<optimized out>) at block/mirror.c:603
#17 0x0000561d4cf01cf2 in job_finalize_single (job=<optimized out>) at job.c:655
#18 0x0000561d4cf01cf2 in job_finalize_single (job=0x561d501b3b80) at job.c:676
#19 0x0000561d4cf0267a in job_completed_txn_abort (job=0x561d501b3b80) at job.c:754
#20 0x0000561d4cf029d0 in job_exit (opaque=0x561d501b3b80) at job.c:869
#21 0x0000561d4cfd2301 in aio_bh_poll (bh=0x561d51391290) at util/async.c:90
#22 0x0000561d4cfd2301 in aio_bh_poll (ctx=ctx@entry=0x561d500bf7c0) at util/async.c:118
#23 0x0000561d4cfd53b0 in aio_dispatch (ctx=0x561d500bf7c0) at util/aio-posix.c:440
#24 0x0000561d4cfd21de in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
#25 0x00007f5a8f29d049 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#26 0x0000561d4cfd46a7 in main_loop_wait () at util/main-loop.c:215
#27 0x0000561d4cfd46a7 in main_loop_wait (timeout=<optimized out>) at util/main-loop.c:238
#28 0x0000561d4cfd46a7 in main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:497
#29 0x0000561d4cc750e7 in main () at vl.c:1964
#30 0x0000561d4cc750e7 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4782


Expected results:
 QEMU should not core dump.

Additional info:
 Attachment is the coredump file.

Comment 2 aihua liang 2019-04-28 11:47:32 UTC
The coredump file is too big to attach; its storage location is:
  10.73.194.27:/vol/s2coredump/bug1703793/core.11486

Comment 5 aihua liang 2019-04-30 08:52:37 UTC
Verified with -blockdev on RHEL8: the core dump is not hit, but some failure messages are printed when quitting the VM (see steps 7 ~ 9).

 qemu-kvm version: qemu-kvm-3.1.0-24.module+el8.0.1+3117+9f83299e.x86_64
 
Test steps:
 1.Mount gluster storage as target:
    mount.glusterfs dhcp-8-210.nay.redhat.com:/bob /mnt/aliang
 
 2.Start guest with qemu cmds:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190423-215834-BzwOjODj,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190423-215834-BzwOjODj,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idapUGH0  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190423-215834-BzwOjODj,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190423-215834-BzwOjODj,path=/var/tmp/seabios-20190423-215834-BzwOjODj,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190423-215834-BzwOjODj,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -blockdev driver=file,node-name=file_node,filename=/home/kvm_autotest_root/images/rhel801-64-virtio-scsi.qcow2 \
    -blockdev driver=qcow2,file=file_node,node-name=drive_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pci.0,addr=0x4 \
    -blockdev driver=file,node-name=file_data,filename=/home/data.qcow2 \
    -blockdev driver=qcow2,file=file_data,node-name=drive_data1 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,bus=pci.0,addr=0x6 \
    -device virtio-net-pci,mac=9a:84:85:86:87:88,id=idc38p8G,vectors=4,netdev=idFM5N3v,bus=pci.0,addr=0x5  \
    -netdev tap,id=idFM5N3v,vhost=on \
    -m 7168  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

 3.Add gluster target node:
     {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/mnt/aliang/sn1','size':1073741824},'job-id':'job1'}}
     {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/mnt/aliang/sn1'}}
     {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':1073741824},'job-id':'job2'}}
     {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
     {'execute':'job-dismiss','arguments':{'id':'job1'}}
     {'execute':'job-dismiss','arguments':{'id':'job2'}}
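The create/add/dismiss sequence in step 3 can be expressed as an ordered list of QMP payloads. A sketch with a hypothetical `blockdev_target_cmds` helper; note that in a real session each blockdev-create job should be polled (via JOB_STATUS_CHANGE or query-jobs) until it reaches the "concluded" state before its job-dismiss is sent:

```python
def blockdev_target_cmds(path="/mnt/aliang/sn1", size=1073741824):
    """Build the step-3 sequence: raw file, file node, qcow2 format, format node."""
    return [
        {"execute": "blockdev-create",
         "arguments": {"job-id": "job1",
                       "options": {"driver": "file",
                                   "filename": path, "size": size}}},
        {"execute": "blockdev-add",
         "arguments": {"driver": "file", "node-name": "drive_sn1",
                       "filename": path}},
        {"execute": "blockdev-create",
         "arguments": {"job-id": "job2",
                       "options": {"driver": "qcow2",
                                   "file": "drive_sn1", "size": size}}},
        {"execute": "blockdev-add",
         "arguments": {"driver": "qcow2", "node-name": "sn1",
                       "file": "drive_sn1"}},
        {"execute": "job-dismiss", "arguments": {"id": "job1"}},
        {"execute": "job-dismiss", "arguments": {"id": "job2"}},
    ]
```

The ordering matters: the qcow2 blockdev-create refers to the already-added `drive_sn1` file node, and the final blockdev-add exposes the formatted image as node `sn1`, the mirror target in step 5.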

 4.In guest, write data to data disk
   (guest)#mkfs.ext4 /dev/vdb
          #dd if=/dev/urandom of=/dev/vdb bs=1M count=900 oflag=direct

 5.Do mirror to target node
    {"execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "sn1", "sync": "full", "job-id":"j1","speed":100}}
    {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "j1", "auto-dismiss": true, "busy": false, "len": 943783936, "offset": 16777216, "status": "running", "paused": false, "speed": 100, "ready": false, "type": "mirror"}]}

 6.Stop gluster volume
   (gluster server)gluster volume stop bob

 7.Check block job info, then set its speed to 0
  {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "j1", "auto-dismiss": true, "busy": false, "len": 943783936, "offset": 16777216, "status": "running", "paused": false, "speed": 100, "ready": false, "type": "mirror"}]}

{ "execute": "block-job-set-speed", "arguments": { "device": "j1", "speed":0}}
{"return": {}}
{"timestamp": {"seconds": 1556612397, "microseconds": 368313}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 388417}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 391112}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 412652}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 414832}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 415204}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 432395}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 434473}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 434892}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 456267}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 458164}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 458514}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 478629}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 480492}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 480862}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 501651}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 501850}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "j1"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 502100}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 943783936, "offset": 17432576, "speed": 0, "type": "mirror", "error": "Transport endpoint is not connected"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 502153}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1556612397, "microseconds": 502189}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

(qemu) qemu-kvm: warning: Failed to unlock byte 200
qemu-kvm: warning: Failed to unlock byte 200
qemu-kvm: warning: Failed to unlock byte 200

 8.Check block job info:
   {"execute":"query-block-jobs"}
{"return": []}

 9.Quit vm
  (qemu) quit
qemu-kvm: Failed to flush the L2 table cache: Input/output error
qemu-kvm: Failed to flush the refcount block cache: Input/output error
qemu-kvm: warning: Failed to unlock byte 100


Additional info:
  When I test with -blockdev on RHEL7, the same issue as on RHEL8 exists.

So, Ademar:
  Do I need to report a new bug on RHEL8 for the issue described in this comment?


aliang

Comment 6 Max Reitz 2019-05-03 15:07:57 UTC
The warnings are completely correct and normal.  The target file is unavailable after all.

It only works with -blockdev because auto-read-only is what leads to the core dump. That option defaults to true for -drive, but to false for -blockdev. When I specify auto-read-only=on, it fails even with -blockdev for me (upstream master).
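To illustrate, the -blockdev line for the data image from the step-2 command line would need auto-read-only=on appended to exercise the crashing reopen path. A small sketch (hypothetical `file_blockdev` helper) that renders the option string:

```python
def file_blockdev(node_name, filename, auto_read_only=None):
    """Render a -blockdev option string for the file driver.

    auto-read-only defaults to off for -blockdev (unlike -drive, where
    it defaults to on), so it must be requested explicitly here.
    """
    opts = {"driver": "file", "node-name": node_name, "filename": filename}
    if auto_read_only is not None:
        opts["auto-read-only"] = "on" if auto_read_only else "off"
    # dicts preserve insertion order, so the option order is stable
    return ",".join(f"{k}={v}" for k, v in opts.items())

# Reproducer variant of the file_data node from step 2:
print(file_blockdev("file_data", "/home/data.qcow2", auto_read_only=True))
```

Passing the resulting string to -blockdev restores the -drive default behavior and, per this comment, reproduces the failure even on the -blockdev configuration.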

Max

Comment 9 Max Reitz 2019-06-03 11:56:23 UTC
As I wrote in comment 6, it does fail with -blockdev if you specify auto-read-only=on.

Max

Comment 11 aihua liang 2019-06-04 02:08:09 UTC
(In reply to Max Reitz from comment #9)
> As I wrote in comment 6, it does fail with -blockdev if you specify
> auto-read-only=on.
> 
> Max

Oh, I see it; I was only concerned with the default setting.

Comment 12 Tingting Mao 2019-06-14 02:22:51 UTC
*** Bug 1589627 has been marked as a duplicate of this bug. ***

Comment 13 Max Reitz 2019-06-25 17:15:38 UTC
I’m closing this BZ because we already have BZ 1709791 for RHEL 8 AV (which existed before this one here was moved to RHEL 8, and which shows a simpler reproducer), and now we have BZ 1722090 for RHEL 7’s qemu-kvm-rhev.  I do think this is something we should consider fixing in RHEL 7 still.

Max

*** This bug has been marked as a duplicate of bug 1709791 ***

