This bug cannot be triggered by libvirt since the necessary -M nvdimm=on option is automatically added. Therefore customers are not affected by this bug.
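For context, a hedged illustration of the option referred to above: when nvdimm support is enabled, libvirt appends nvdimm=on to the machine option on the generated QEMU command line. Reusing the machine type from the abrt-captured command line in the description below, the option would look roughly like:
-machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off,nvdimm=on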
The patch hasn't been merged upstream yet, so let's move it to RHEL 7.5.
Merged upstream in commit 7f3cf2d6e7d1231d854902c9016823961e59d1f4 ("hw/i386: fix nvdimm check error path"). The fix will land in RHEL 7.5 via the next rebase.
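As a quick sanity check during the rebase, the presence of the fix in a given QEMU tree can be confirmed from the commit referenced above (illustrative; the abbreviated hash length may differ depending on the git version):
# git log --oneline -1 7f3cf2d6e7d1231d854902c9016823961e59d1f4
7f3cf2d6e7d1 hw/i386: fix nvdimm check error path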
Reproduce:
qemu-kvm-rhev-2.9.0-12.el7
kernel-3.10.0-765.el7.x86_64
1. Boot a guest without 'nvdimm' in '-M':
# /usr/libexec/qemu-kvm -m 5G,slots=4,maxmem=32G rhel75-64-virtio.qcow2 -monitor stdio -vnc :0 -M pc
2. Hot-plug the nvdimm device twice:
(qemu) object_add memory-backend-file,id=mem1,share=on,mem-path=/tmp/aa,size=1G
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
QEMU quits with the following message:
qemu-kvm: /builddir/build/BUILD/qemu-2.9.0/exec.c:1575: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed.
Aborted (core dumped)
So the bug is reproduced.
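For contrast, a hedged sketch of the expected flow once nvdimm support is enabled at boot; the command below simply adds nvdimm=on to '-M' from step 1, and the success path is assumed rather than captured from this report ('info memory-devices' should then list the plugged nvdimm):
# /usr/libexec/qemu-kvm -m 5G,slots=4,maxmem=32G rhel75-64-virtio.qcow2 -monitor stdio -vnc :0 -M pc,nvdimm=on
(qemu) object_add memory-backend-file,id=mem1,share=on,mem-path=/tmp/aa,size=1G
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
(qemu) info memory-devices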
Verify:
qemu-kvm-rhev-2.10.0-4.el7
kernel-3.10.0-765.el7.x86_64
With the same steps as above, after hot-plugging the nvdimm twice, QEMU does not quit; it prints "nvdimm is not enabled: missing 'nvdimm' in '-M'" each time, and the guest works well.
(qemu) object_add memory-backend-file,id=mem1,share=on,mem-path=/tmp/aa,size=1G
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'
So the bug is fixed.
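As an extra hedged check that the guest is unaffected after the rejected hotplugs, the VM state can be confirmed from the monitor (output line is illustrative, not captured from this run):
(qemu) info status
VM status: running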
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2018:1104
Created attachment 1286326 [details]
qemu-kvm-rhev.btrace

Description of problem:
qemu gets SIGABRT when an nvdimm device is hot-plugged twice

Version-Release number of selected component (if applicable):
libvirt-3.2.0-9.el7.x86_64
qemu-img-rhev-2.9.0-9.el7.x86_64
kernel-3.10.0-679.el7.x86_64

How reproducible:
100%

Steps to reproduce:
1. Create an nvdimm backing file on the host:
# truncate -s 256M /tmp/nvdimm2

2. Start a guest without an nvdimm device:
  <maxMemory slots='16' unit='M'>2048</maxMemory>
  <memory unit='M'>1024</memory>
  <currentMemory unit='M'>512</currentMemory>
  <vcpu placement='static'>4</vcpu>
  .......
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0-1' memory='512' unit='M'/>
      <cell id='1' cpus='2-3' memory='512' unit='M'/>
    </numa>
  </cpu>
  ......

# virsh start r7t
Domain r7t started

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     r7t                            running

3. Try to attach an nvdimm device:
# cat nvdimm.xml
<memory model='nvdimm' access='shared'>
  <source>
    <path>/tmp/nvdimm2</path>
  </source>
  <target>
    <size unit='M'>256</size>
    <node>1</node>
    <label>
      <size unit='KiB'>128</size>
    </label>
  </target>
  <address type='dimm' slot='1'/>
</memory>

# virsh attach-device r7t nvdimm.xml
error: Failed to attach device from nvdimm.xml
error: internal error: unable to execute QEMU command 'device_add': nvdimm is not enabled: missing 'nvdimm' in '-M'

4. Try to attach a second time; the attach fails and qemu-kvm-rhev hits SIGABRT:
# virsh attach-device r7t nvdimm.xml
error: Failed to attach device from nvdimm.xml
error: Unable to read from monitor: Connection reset by peer

Expected results:
In steps 3 and 4, the nvdimm device is attached successfully and qemu-kvm-rhev does not SIGABRT.

Additional info:
1. The SIGABRT of qemu-kvm-rhev:
# abrt-cli ls
id d413ccc04bca786bd7f8f8e8a6012d333a5926f8
reason:         qemu-kvm killed by SIGABRT
......
cmdline:        /usr/libexec/qemu-kvm -name guest=r7t,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-r7t/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu SandyBridge,vme=on,ss=on,pcid=on,hypervisor=on,arat=on,tsc_adjust=on,xsaveopt=on -m size=1048576k,slots=16,maxmem=2097152k -realtime mlock=off -smp 4,sockets=2,cores=2,threads=1 -numa node,nodeid=0,cpus=0-1,mem=512 -numa node,nodeid=1,cpus=2-3,mem=512 -uuid 280040ed-d8df-4572-b851-02f932efd2ea -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-r7t/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/rhel7-4.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:0d:c7,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-3-r7t/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
package:        qemu-kvm-rhev-2.9.0-9.el7
uid:            107 (qemu)
Directory:      /var/spool/abrt/ccpp-2017-06-09-01:54:50-13793
Run 'abrt-cli report /var/spool/abrt/ccpp-2017-06-09-01:54:50-13793' for creating a case in Red Hat Customer Portal

2. Attached files: libvirtd.log, qemu-kvm-rhev.btrace
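A hedged workaround sketch, following the earlier note that libvirt adds the nvdimm=on machine option automatically: if the nvdimm device is part of the persistent domain configuration before the guest starts, libvirt can generate the flag and the device should come up with the guest. The commands below reuse nvdimm.xml from step 3 and assume the domain is shut off; they are an illustration, not a capture from this report:
# virsh attach-device r7t nvdimm.xml --config
# virsh start r7t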