Bug 1314591 - Hot-unplug disk failed
Summary: Hot-unplug disk failed
Keywords:
Status: CLOSED DUPLICATE of bug 1362084
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Markus Armbruster
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1086603
Blocks: 963588
 
Reported: 2016-03-04 01:46 UTC by jingzhao
Modified: 2016-09-29 11:43 UTC
CC List: 21 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1086603
Environment:
Last Closed: 2016-09-29 11:42:15 UTC
Target Upstream Version:
Embargoed:



Comment 2 Marcel Apfelbaum 2016-04-04 12:40:46 UTC
Hi,

I didn't manage to reproduce this with one of the latest builds: qemu-kvm-rhev-2.5.0-1.el7.

I noticed you used qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64; however, qemu-kvm-rhev-2.5.0+ should be used for RHEL 7.3.

Please try with the latest QEMU build and check if the problem is solved.
Thanks,
Marcel

Comment 3 jingzhao 2016-04-07 02:42:58 UTC
(In reply to Marcel Apfelbaum from comment #2)
> Hi,
> 
> I didn't manage to reproduce this with one of the latest builds:
> qemu-kvm-rhev-2.5.0-1.el7.
> 
> I noticed you used qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64; however,
> qemu-kvm-rhev-2.5.0+ should be used for RHEL 7.3.
> 
> Please try with the latest QEMU build and check if the problem is solved.
> Thanks,
> Marcel

I always hit the issue with the latest 7.3 version:
kernel-3.10.0-373.el7.x86_64
qemu-img-rhev-2.5.0-4.el7.x86_64
seabios-1.9.1-2.el7.x86_64

Thanks
Jing Zhao

Comment 4 Marcel Apfelbaum 2016-08-08 15:13:44 UTC
(In reply to jingzhao from comment #3)
> (In reply to Marcel Apfelbaum from comment #2)
> > Hi,
> > 
> > I didn't manage to reproduce this with one of the latest builds:
> > qemu-kvm-rhev-2.5.0-1.el7.
> > 
> > I noticed you used qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64; however,
> > qemu-kvm-rhev-2.5.0+ should be used for RHEL 7.3.
> > 
> > Please try with the latest QEMU build and check if the problem is solved.
> > Thanks,
> > Marcel
> 
> I always hit the issue with the latest 7.3 version:
> kernel-3.10.0-373.el7.x86_64
> qemu-img-rhev-2.5.0-4.el7.x86_64
> seabios-1.9.1-2.el7.x86_64
> 
> Thanks
> Jing Zhao

Hi,

I tried to reproduce with qemu-kvm-rhev-2.6.0-17.el7, but it seems the problem
is solved now.

Can you please approve?

Thanks,
Marcel

Comment 5 juzhang 2016-09-21 01:15:08 UTC
Hi Jing,

Could you have a look at comment 4?

Best Regards,
Junyi

Comment 6 jingzhao 2016-09-21 02:15:06 UTC
(In reply to Marcel Apfelbaum from comment #4)
> (In reply to jingzhao from comment #3)
> > (In reply to Marcel Apfelbaum from comment #2)
> > > Hi,
> > > 
> > > I didn't manage to reproduce this with one of the latest builds:
> > > qemu-kvm-rhev-2.5.0-1.el7.
> > > 
> > > I noticed you used qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64; however,
> > > qemu-kvm-rhev-2.5.0+ should be used for RHEL 7.3.
> > > 
> > > Please try with the latest QEMU build and check if the problem is solved.
> > > Thanks,
> > > Marcel
> > 
> > I always hit the issue with the latest 7.3 version:
> > kernel-3.10.0-373.el7.x86_64
> > qemu-img-rhev-2.5.0-4.el7.x86_64
> > seabios-1.9.1-2.el7.x86_64
> > 
> > Thanks
> > Jing Zhao
> 
> Hi,
> 
> I tried to reproduce with qemu-kvm-rhev-2.6.0-17.el7, but it seems the
> problem is solved now.
> 
> Can you please approve?
> 
> Thanks,
> Marcel

Marcel, I tried on my side: I can reproduce it on qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64, but I am blocked by another core dump issue:


[root@jinzhao home]# uname -r
3.10.0-509.el7.x86_64
[root@jinzhao home]# rpm -qa |grep qemu-kvm-rhev
qemu-kvm-rhev-debuginfo-2.6.0-26.el7.x86_64
qemu-kvm-rhev-2.6.0-26.el7.x86_64
[root@jinzhao home]# rpm -qa |grep seabios
seabios-bin-1.9.1-4.el7.noarch

1. Boot the guest with the following command:
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge \
-nodefaults -rtc base=utc \
-m 4G \
-smp 2,sockets=2,cores=1,threads=1 \
-enable-kvm \
-name rhel7 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-monitor stdio \
-serial unix:/tmp/serial0,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga std \
-vnc :0 \
-drive file=/home/bug/big.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-netdev tap,id=hostnet1,vhost=on \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=54:52:00:B6:40:22 \
-qmp tcp::8887,server,nowait \

2. On the QMP side:
{"timestamp": {"seconds": 1474423547, "microseconds": 204794}, "event": "NIC_RX_FILTER_CHANGED", "data": {"name": "net1", "path": "/machine/peripheral/net1/virtio-backend"}}
{ "execute": "blockdev-add", "arguments": {'options' : {'driver': 'raw', 'id':'drive-disk1', 'read-only': true, 'discard':'unmap', 'file': {'driver': 'file', 'filename': '/home/my-data-disk.raw'}, 'cache': { 'writeback': false, 'direct': true, 'no-flush': false }}} }
Connection closed by foreign host.
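
The QMP session above is a plain TCP connection to the monitor opened in step 1 (-qmp tcp::8887,server,nowait); a minimal sketch of how to open it, assuming the default local binding:

    $ telnet 127.0.0.1 8887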

The following is the core dump info:
(gdb) bt
#0  0x00007ff381e83e24 in visit_type_BlockdevRef (v=0x7ff384964a30, name=name@entry=0x7ff381efb07f "file", obj=0x7ff384944608, errp=errp@entry=0x7ffc4ac24120) at qapi-visit.c:2251
#1  0x00007ff381e842a2 in visit_type_BlockdevOptionsGenericFormat_members (v=<optimized out>, obj=<optimized out>, errp=0x7ffc4ac24140) at qapi-visit.c:1887
#2  0x00007ff381e83c35 in visit_type_BlockdevOptions_members (v=v@entry=0x7ff384964a30, obj=0x7ff3849445a0, errp=errp@entry=0x7ffc4ac24180) at qapi-visit.c:1633
#3  0x00007ff381e83d74 in visit_type_BlockdevOptions (v=0x7ff384964a30, name=name@entry=0x7ff381eeee7f "options", obj=obj@entry=0x7ffc4ac241e0, errp=errp@entry=0x7ffc4ac241c0)
    at qapi-visit.c:1657
#4  0x00007ff381e896f2 in visit_type_q_obj_blockdev_add_arg_members (v=<optimized out>, obj=obj@entry=0x7ffc4ac241e0, errp=errp@entry=0x0) at qapi-visit.c:12659
#5  0x00007ff381ce3a7b in qmp_marshal_blockdev_add (args=<optimized out>, ret=<optimized out>, errp=0x7ffc4ac24260) at qmp-marshal.c:530
#6  0x00007ff381c087f5 in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-2.6.0/monitor.c:3929
#7  0x00007ff381e97618 in json_message_process_token (lexer=0x7ff384943f08, input=0x7ff3849336c0, type=JSON_RCURLY, x=267, y=1) at qobject/json-streamer.c:105
#8  0x00007ff381eac0bb in json_lexer_feed_char (lexer=lexer@entry=0x7ff384943f08, ch=125 '}', flush=flush@entry=false) at qobject/json-lexer.c:310
#9  0x00007ff381eac17e in json_lexer_feed (lexer=0x7ff384943f08, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:360
#10 0x00007ff381e976d9 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:124
#11 0x00007ff381c06dab in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-2.6.0/monitor.c:3945
#12 0x00007ff381cdb801 in tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x7ff3849edc20) at qemu-char.c:2897
#13 0x00007ff378c11d7a in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#14 0x00007ff381e08e00 in glib_pollfds_poll () at main-loop.c:213
#15 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:258
#16 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:506
#17 0x00007ff381bd680f in main_loop () at vl.c:1937
#18 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4693

I also tried qemu-kvm-rhev-2.6.0-17.el7 and hit the same issue.

Thanks
Jing

Comment 7 Marcel Apfelbaum 2016-09-28 13:47:05 UTC
Hi Ademar,

Can someone from the block layer team have a look at the issue described in comment #6?

Thanks!
Marcel

Comment 8 Ademar Reis 2016-09-28 22:37:40 UTC
(In reply to Marcel Apfelbaum from comment #7)
> Hi Ademar,
> 
> Can someone from the block layer team have a look at the issue described in
> comment #6?
> 
> Thanks!
> Marcel

Kevin, the crash happened after a blockdev-add command via QMP. Please see if it rings a bell.

Comment 9 Kevin Wolf 2016-09-29 08:05:40 UTC
Definitely a QAPI error handling problem. It's fixed upstream, where I get the
right error message instead:

{"error": {"class": "GenericError", "desc": "QMP input object member 'writeback' is unexpected"}}

Comment 10 Markus Armbruster 2016-09-29 11:24:33 UTC
I suspect this is a duplicate of bug 1362084.

Comment 11 Markus Armbruster 2016-09-29 11:42:15 UTC
Simplified reproducer:

1. Run qemu-kvm with a QMP monitor on stdio:

    $ qemu-kvm -nodefaults -S -qmp stdio

2. Send two QMP commands:

    { "execute": "qmp_capabilities" }
    { "execute": "blockdev-add", "arguments": {'options' : {'driver': 'raw', 'id':'drive-disk1', 'read-only': true, 'discard':'unmap', 'file': {'driver': 'file', 'filename': '/home/my-data-disk.raw'}, 'cache': { 'writeback': false, 'direct': true, 'no-flush': false } } } }

Expected result:

    [Usual startup I/O...]
    { "execute": "blockdev-add", "arguments": {'options' : {'driver': 'raw', 'id':'drive-disk1', 'read-only': true, 'discard':'unmap', 'file': {'driver': 'file', 'filename': '/home/my-data-disk.raw'}, 'cache': { 'writeback': false, 'direct': true, 'no-flush': false } } } }
    {"error": {"class": "GenericError", "desc": "QMP input object member 'writeback' is unexpected"}}

Actual result of my local test build:

    [Usual startup I/O...]
    { "execute": "blockdev-add", "arguments": {'options' : {'driver': 'raw', 'id':'drive-disk1', 'read-only': true, 'discard':'unmap', 'file': {'driver': 'file', 'filename': '/home/my-data-disk.raw'}, 'cache': { 'writeback': false, 'direct': true, 'no-flush': false } } } }
    Segmentation fault (core dumped)

I get the expected result with my local test build of my fix for bug
1362084.

Closing as duplicate of 1362084.

*** This bug has been marked as a duplicate of bug 1362084 ***
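
For convenience, the two-step reproducer above can also be driven non-interactively; a sketch, assuming the same -qmp stdio setup from step 1 and appending a quit command so the process exits on a build where the crash is fixed:

    $ printf '%s\n' \
          '{ "execute": "qmp_capabilities" }' \
          '{ "execute": "blockdev-add", "arguments": {"options": {"driver": "raw", "id": "drive-disk1", "read-only": true, "discard": "unmap", "file": {"driver": "file", "filename": "/home/my-data-disk.raw"}, "cache": {"writeback": false, "direct": true, "no-flush": false}}} }' \
          '{ "execute": "quit" }' \
      | qemu-kvm -nodefaults -S -qmp stdio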

