Bug 1833261 - qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed.
Summary: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Igor Mammedov
QA Contact: Yumei Huang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-08 08:52 UTC by Yumei Huang
Modified: 2020-11-17 17:48 UTC
6 users

Fixed In Version: qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 17:48:34 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Yumei Huang 2020-05-08 08:52:42 UTC
Description of problem:
When -numa node,memdev and -M pc,memory-backend use the same memdev, qemu core dumps.

Version-Release number of selected component (if applicable):
qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422
kernel-4.18.0-194.el8.x86_64

How reproducible:
always

Steps to Reproduce:
1. # /usr/libexec/qemu-kvm \
     -object memory-backend-ram,id=mem0,size=4G \
     -M pc,memory-backend=mem0 \
     -m 4G -monitor stdio \
     -numa node,memdev=mem0

Actual results:
(qemu) qemu-kvm: /builddir/build/BUILD/qemu-5.0.0-rc4/exec.c:2006: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed.
Aborted (core dumped)

Expected results:
QEMU should exit with an error message instead of dumping core.

Additional info:
(gdb) bt full
#0  0x00007f6d174c47ff in raise () at /lib64/libc.so.6
#1  0x00007f6d174aec35 in abort () at /lib64/libc.so.6
#2  0x00007f6d174aeb09 in _nl_load_domain.cold.0 () at /lib64/libc.so.6
#3  0x00007f6d174bcde6 in .annobin_assert.c_end () at /lib64/libc.so.6
#4  0x000055c9f3da30c6 in qemu_ram_set_idstr ()
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/exec.c:2006
#5  0x000055c9f3fdb1e4 in vmstate_register_ram (mr=mr@entry=0x55c9f55da300, dev=dev@entry=0x0)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/migration/savevm.c:2921
#6  0x000055c9f3fdb228 in vmstate_register_ram_global (mr=mr@entry=0x55c9f55da300)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/migration/savevm.c:2934
#7  0x000055c9f3f33bf9 in machine_consume_memdev
    (machine=machine@entry=0x55c9f55cf000, backend=0x55c9f55da2a0)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/hw/core/machine.c:1257
        ret = 0x55c9f55da300
#8  0x000055c9f3f3a62c in numa_init_memdev_container (ram=0x55c9f5591400, ms=0x55c9f55cf000)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/hw/core/numa.c:671
        size = <optimized out>
        backend = <optimized out>
        seg = <optimized out>
        i = 0
        addr = 0
        numa_total = <optimized out>
        i = <optimized out>
        mc = <optimized out>
        __func__ = "numa_complete_configuration"
        numa_info = <optimized out>
        __PRETTY_FUNCTION__ = "numa_complete_configuration"
#9  0x000055c9f3f3a62c in numa_complete_configuration (ms=ms@entry=0x55c9f55cf000)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/hw/core/numa.c:763
        numa_total = <optimized out>
        i = <optimized out>
        mc = <optimized out>
        __func__ = "numa_complete_configuration"
        numa_info = <optimized out>
        __PRETTY_FUNCTION__ = "numa_complete_configuration"
#10 0x000055c9f3f33cf5 in machine_run_board_init (machine=0x55c9f55cf000)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/hw/core/machine.c:1273
        machine_class = 0x55c9f5588ec0
        __func__ = "machine_run_board_init"
#11 0x000055c9f3e873fe in qemu_init (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/softmmu/vl.c:4373
        i = <optimized out>
        snapshot = 0
        linux_boot = <optimized out>
        initrd_filename = 0x0
        kernel_filename = 0x0
        kernel_cmdline = <optimized out>
        boot_order = 0x55c9f41d5500 "cad"
        boot_once = <optimized out>
        ds = <optimized out>
        opts = <optimized out>
        machine_opts = <optimized out>
        icount_opts = <optimized out>
        accel_opts = <optimized out>
        olist = <optimized out>
        optind = 11
        optarg = 0x7fff5d4b941f "node,memdev=mem0"
        loadvm = 0x0
        machine_class = <optimized out>
        cpu_option = <optimized out>
        vga_model = 0x55c9f437aa34 "std"
        qtest_chrdev = 0x0
        qtest_log = 0x0
        incoming = 0x0
        userconfig = <optimized out>
        nographic = false
        display_remote = <optimized out>
        log_mask = 0x0
        log_file = 0x0
        trace_file = <optimized out>
        maxram_size = <optimized out>
        ram_slots = 0
        vmstate_dump_file = 0x0
        main_loop_err = 0x0
        err = 0x0
        list_data_dirs = false
        dir = <optimized out>
        mem_path = 0x0
        have_custom_ram_size = <optimized out>
        bdo_queue = {sqh_first = 0x0, sqh_last = 0x7fff5d4b7b20}
        plugin_list = {tqh_first = 0x0, tqh_circ = {tql_next = 0x0, tql_prev = 0x7fff5d4b7b30}}
        mem_prealloc = 0
        __func__ = "qemu_init"
#12 0x000055c9f3d9e6bd in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
    at /usr/src/debug/qemu-kvm-5.0.0-0.scrmod+el8.3.0+6399+84188c13.wrb200422.x86_64/softmmu/main.c:48

Comment 1 John Ferlan 2020-05-08 13:23:31 UTC
Igor - assigning directly to you as this would seem related to your commit 6b61c2c59 (or other nearby/related commits).

Setting ITR=8.3.0 for RHEL-AV

Comment 2 Igor Mammedov 2020-05-11 10:09:23 UTC
This looks like invalid usage:
  -M pc,memory-backend=mem0
and
  -numa node,memdev=mem0

The same backend can't be consumed by more than one user,
so if you need NUMA with a memdev, leave out "memory-backend=".
I'll post a patch to check for this and error out in a sane way.

Comment 3 Igor Mammedov 2020-05-13 12:04:05 UTC
posted upstream
https://www.mail-archive.com/qemu-devel@nongnu.org/msg702151.html

Comment 5 Igor Mammedov 2020-08-10 09:52:20 UTC
commit ea81f98bce48fc424960ca180fe2ccad0427bfc7 "numa: prevent usage of -M memory-backend and -numa memdev at the same time"
should be released as part of qemu-5.1

Considering it is an invalid configuration, it's probably not worth backporting to 8.3-AV
(the end result is the same: the user still can't start QEMU, but gets a clear error message instead of a crash).

Comment 7 Danilo de Paula 2020-08-13 02:15:27 UTC
Moved to MODIFIED per comment 5.

Comment 11 Yumei Huang 2020-08-13 10:30:24 UTC
Verify:
qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901
kernel-4.18.0-227.el8.x86_64

No core dump when -numa node,memdev and -M pc,memory-backend use the same memdev; QEMU quits and prints an error message.

# /usr/libexec/qemu-kvm     \
 -object memory-backend-ram,id=mem0,size=4G     \
 -M pc,memory-backend=mem0     \
 -m 4G -monitor stdio     \
 -numa node,memdev=mem0
QEMU 5.1.0 monitor - type 'help' for more information
(qemu) qemu-kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive

Comment 14 errata-xmlrpc 2020-11-17 17:48:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137
