Bug 1318181 - qemu-kvm gets SIGSEGV when hot-unplugging a disk
Summary: qemu-kvm gets SIGSEGV when hot-unplugging a disk
Keywords:
Status: CLOSED DUPLICATE of bug 1341531
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Markus Armbruster
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1318490 1334340
Depends On:
Blocks:
 
Reported: 2016-03-16 08:50 UTC by Han Han
Modified: 2016-07-22 05:47 UTC
CC List: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-22 05:47:44 UTC
Target Upstream Version:
Embargoed:



Description Han Han 2016-03-16 08:50:37 UTC
Description of problem:
qemu-kvm crashes with SIGSEGV when a disk is hot-unplugged from the guest (see summary).

Version-Release number of selected component (if applicable):
libvirt-1.3.2-1.el7.x86_64
qemu-kvm-rhev-2.5.0-2.el7.x86_64
kernel-3.10.0-327.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a guest with an OS and two disks
# virsh list
 Id    Name                           State
----------------------------------------------------
 3     RH                             running

# virsh dumpxml RH|awk '/<disk/,/<\/disk/'                                                            
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/V.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/tmp/haha'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
2. Hot-unplug the second disk
# virsh detach-disk RH vdb                
error: Failed to detach disk
error: Unable to read from monitor: Connection reset by peer

# abrt-cli ls            
id e312c2f2c89bed247bd07f55626298de1b462a8e
reason:         qemu-kvm killed by SIGSEGV
time:           Wed 16 Mar 2016 04:17:19 PM CST
cmdline:        /usr/libexec/qemu-kvm -name RH -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off,vmport=off -cpu Opteron_G3 -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid d7038a27-800c-4871-97d0-9dcac588418e -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-RH/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x9 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/V.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/tmp/haha,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:bb:a9:2d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-RH/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5900,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -drive file=/dev/sg1,if=none,id=drive-hostdev0 -device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
package:        qemu-kvm-rhev-2.5.0-2.el7
uid:            107 (qemu)
count:          1
Directory:      /var/spool/abrt/ccpp-2016-03-16-04:17:19-5199
Run 'abrt-cli report /var/spool/abrt/ccpp-2016-03-16-04:17:19-5199' for creating a case in Red Hat Customer Portal

Actual results:
qemu-kvm is killed by SIGSEGV during the detach, as shown in step 2.

Expected results:
No SIGSEGV

Additional info:
The issue does not reproduce with qemu-kvm-rhev-2.3.0-31.el7.
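
A rough sketch of the reproduction as shell commands, assuming the second disk is a plain raw image as in the XML above; the image size and the --live/--subdriver flags are assumptions and not taken from this report:

# qemu-img create -f raw /tmp/haha 100M
# virsh attach-disk RH /tmp/haha vdb --live --subdriver raw
# virsh detach-disk RH vdb

The detach-disk step is where qemu-kvm receives SIGSEGV.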

Comment 1 Han Han 2016-03-16 08:53:47 UTC
The backtrace:
#0  qstring_get_str (qstring=0x0) at qobject/qstring.c:134
No locals.
#1  0x00007f6bf1c331ed in qdict_get_str (qdict=<optimized out>, key=key@entry=0x7f6bf1cc095d "id") at qobject/qdict.c:285
No locals.
#2  0x00007f6bf1a8e437 in hmp_drive_del (mon=<optimized out>, qdict=<optimized out>) at blockdev.c:2741
        id = <optimized out>
        blk = <optimized out>
        bs = <optimized out>
        aio_context = <optimized out>
        local_err = 0x7f6bf4943200
#3  0x00007f6bf19ca65b in handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-2.5.0/monitor.c:3905
        local_err = 0x0
        obj = <optimized out>
        data = 0x0
        input = <optimized out>
        args = 0x7f6bf4936800
        cmd_name = <optimized out>
        mon = 0x7f6bf35d1500
        __func__ = "handle_qmp_command"
#4  0x00007f6bf1c34c70 in json_message_process_token (lexer=0x7f6bf35d1568, input=0x7f6bf35e8ba0, type=JSON_RCURLY, x=94, y=32) at qobject/json-streamer.c:93
        parser = 0x7f6bf35d1560
        token = 0x7f6bf4f23640
#5  0x00007f6bf1c48973 in json_lexer_feed_char (lexer=lexer@entry=0x7f6bf35d1568, ch=125 '}', flush=flush@entry=false) at qobject/json-lexer.c:310
        new_state = <optimized out>
        __PRETTY_FUNCTION__ = "json_lexer_feed_char"
#6  0x00007f6bf1c48a3e in json_lexer_feed (lexer=0x7f6bf35d1568, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:360
        err = <optimized out>
        i = <optimized out>
#7  0x00007f6bf1c34d69 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:113
No locals.
#8  0x00007f6bf19c895b in monitor_qmp_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-2.5.0/monitor.c:3921
        old_mon = 0x0
#9  0x00007f6bf1a9614e in qemu_chr_be_write (len=<optimized out>, buf=0x7fff79920e50 "}\016\222y\377\177", s=0x7f6bf35ce880) at qemu-char.c:280
No locals.
#10 tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x7f6bf35ce880) at qemu-char.c:2902
        chr = 0x7f6bf35ce880
        s = 0x7f6bf35a0540
        buf = "}\016\222y\377\177\000\000 \"\356\341k\177\000\000x\"\222y\377\177", '\000' <repeats 18 times>, "\020", '\000' <repeats 15 times>, "\001", '\000' <repeats 151 times>...
        len = <optimized out>
        size = <optimized out>
#11 0x00007f6be6d8679a in g_main_dispatch (context=0x7f6bf3557200) at gmain.c:3109
        dispatch = 0x7f6be6dca050 <g_io_unix_dispatch>
        prev_source = 0x0
        was_in_call = 0
        user_data = 0x7f6bf35ce880
        callback = 0x7f6bf1a960c0 <tcp_chr_read>
        cb_funcs = 0x7f6be70728a0 <g_source_callback_funcs>
        cb_data = 0x7f6bf35ed080
        need_destroy = <optimized out>
        source = 0x7f6bf3553980
        current = 0x7f6bf355cc50
        i = 0
#12 g_main_context_dispatch (context=context@entry=0x7f6bf3557200) at gmain.c:3708
No locals.
#13 0x00007f6bf1bbffc0 in glib_pollfds_poll () at main-loop.c:211
        context = 0x7f6bf3557200
        pfds = <optimized out>
#14 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:256
        ret = 2
        spin_counter = 0
#15 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:504
        ret = 2
        timeout = 4294967295
        timeout_ns = <optimized out>
#16 0x00007f6bf199e65e in main_loop () at vl.c:1923
        nonblocking = <optimized out>
        last_io = 2
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4695
        i = <optimized out>
        snapshot = <optimized out>
        linux_boot = <optimized out>
        initrd_filename = <optimized out>
        kernel_filename = <optimized out>
        kernel_cmdline = <optimized out>
        boot_order = 0x7f6bf1c5f4b2 "cad"
        boot_once = 0x0
        cyls = <optimized out>
        heads = <optimized out>
        secs = <optimized out>
        translation = <optimized out>
        hda_opts = <optimized out>
        opts = <optimized out>
        machine_opts = <optimized out>
        icount_opts = <optimized out>
        olist = <optimized out>
        optind = 96
        optarg = 0x7f6bf35ec980 "pc-i440fx-rhel7.2.0"
        loadvm = <optimized out>
        machine_class = <optimized out>
        cpu_model = <optimized out>
        vga_model = 0x0
        qtest_chrdev = <optimized out>
        qtest_log = <optimized out>
        pid_file = <optimized out>
        incoming = <optimized out>
        show_vnc_port = <optimized out>
        defconfig = <optimized out>
        userconfig = false
        log_mask = <optimized out>
        log_file = <optimized out>
        trace_events = <optimized out>
        trace_file = <optimized out>
        maxram_size = <optimized out>
        ram_slots = <optimized out>
        vmstate_dump_file = <optimized out>
        main_loop_err = 0x0
        err = 0x0
        __func__ = "main"

Comment 2 Han Han 2016-03-16 09:02:21 UTC
The bug seems to be fixed upstream; it does not reproduce with qemu-kvm-2.5.0-9.fc24.x86_64.

Comment 6 Wayne Sun 2016-03-17 07:16:14 UTC
*** Bug 1318490 has been marked as a duplicate of this bug. ***

Comment 10 Laurent Vivier 2016-05-11 15:43:00 UTC
*** Bug 1334340 has been marked as a duplicate of this bug. ***

Comment 14 Han Han 2016-06-01 03:01:34 UTC
Hostdev devices also have the problem, e.g. when unplugging the following XML:
  <hostdev mode='subsystem' type='scsi' managed='no'>
    <source>
      <adapter name='scsi_host4'/>
      <address bus='0' target='0' unit='0'/>
    </source>
    <alias name='hostdev0'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </hostdev>
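
For reference, a hostdev defined by XML like the above would typically be unplugged by saving the element to a file and running virsh detach-device; a hedged sketch, where the file name hostdev.xml is only illustrative:

# virsh detach-device RH hostdev.xml --live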

Comment 15 Markus Armbruster 2016-07-21 14:04:10 UTC
I suspect this is a duplicate of bug 1341531.  We fixed that one in qemu-kvm-rhev-2.6.0-12.el7.  Could you please retest this bug with that version?  If it appears to be fixed there, testing the previous version as well would be nice.

Comment 16 Han Han 2016-07-22 01:43:40 UTC
I tested it on qemu-kvm-rhev-2.6.0-12.el7 and qemu-kvm-rhev-2.6.0-11.el7 by detaching a virtio disk. qemu-kvm-rhev-2.6.0-12.el7 is fixed, while qemu-kvm-rhev-2.6.0-11.el7 is not.

Comment 17 Markus Armbruster 2016-07-22 05:47:44 UTC
Han Han, thank you very much for your prompt testing.

*** This bug has been marked as a duplicate of bug 1341531 ***

