Bug 1460119 - qemu gets SIGABRT when hot-plug nvdimm device twice
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Stefan Hajnoczi
QA Contact: Yumei Huang
Depends On:
Blocks: 1473046
Reported: 2017-06-09 03:48 EDT by chhu
Modified: 2018-04-10 20:25 EDT (History)
CC: 12 users

See Also:
Fixed In Version: qemu-kvm-rhev-2.10.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2018-04-10 20:23:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
qemu-kvm-rhev.btrace (25.26 KB, text/plain)
2017-06-09 03:48 EDT, chhu
libvirtd.log (24.80 KB, text/plain)
2017-06-09 04:08 EDT, chhu

External Trackers
Tracker: Red Hat Product Errata RHSA-2018:1104 (Last Updated: 2018-04-10 20:25 EDT)

Description chhu 2017-06-09 03:48:19 EDT
Created attachment 1286326 (qemu-kvm-rhev.btrace)

Description of problem:
qemu gets SIGABRT when hot-plug nvdimm device twice

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.9.0-9.el7

How reproducible:

Steps to reproduce:
1. Create nvdimm file on the host:
     # truncate -s 256M /tmp/nvdimm2

2. Start a guest without nvdimm device:
 <maxMemory slots='16' unit='M'>2048</maxMemory>
  <memory unit='M'>1024</memory>
  <currentMemory unit='M'>512</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0-1' memory='512' unit='M'/>
      <cell id='1' cpus='2-3' memory='512' unit='M'/>
    </numa>
  </cpu>

# virsh start r7t
Domain r7t started

# virsh list --all
 Id    Name                           State
 3     r7t                            running

3. Try to attach a nvdimm device
# cat nvdimm.xml
   <memory model='nvdimm' access='shared'>
     <source>
       <path>/tmp/nvdimm2</path>
     </source>
     <target>
       <size unit='M'>256</size>
       <label>
         <size unit='KiB'>128</size>
       </label>
     </target>
     <address type='dimm' slot='1'/>
   </memory>

# virsh attach-device r7t nvdimm.xml
error: Failed to attach device from nvdimm.xml
error: internal error: unable to execute QEMU command 'device_add': nvdimm is not enabled: missing 'nvdimm' in '-M'

4. Try to attach a second time; the attach fails and qemu-kvm-rhev hits SIGABRT.
# virsh attach-device r7t nvdimm.xml
error: Failed to attach device from nvdimm.xml
error: Unable to read from monitor: Connection reset by peer

Expected results:
In steps 3 and 4: the nvdimm device is attached successfully, with no qemu-kvm-rhev SIGABRT.

Additional info:
1. The SIGABRT report for qemu-kvm-rhev:
# abrt-cli ls
id d413ccc04bca786bd7f8f8e8a6012d333a5926f8
reason:         qemu-kvm killed by SIGABRT
cmdline:        /usr/libexec/qemu-kvm -name guest=r7t,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-r7t/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu SandyBridge,vme=on,ss=on,pcid=on,hypervisor=on,arat=on,tsc_adjust=on,xsaveopt=on -m size=1048576k,slots=16,maxmem=2097152k -realtime mlock=off -smp 4,sockets=2,cores=2,threads=1 -numa node,nodeid=0,cpus=0-1,mem=512 -numa node,nodeid=1,cpus=2-3,mem=512 -uuid 280040ed-d8df-4572-b851-02f932efd2ea -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-r7t/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/rhel7-4.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:0d:c7,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-3-r7t/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg
package:        qemu-kvm-rhev-2.9.0-9.el7
uid:            107 (qemu)
Directory:      /var/spool/abrt/ccpp-2017-06-09-01:54:50-13793
Run 'abrt-cli report /var/spool/abrt/ccpp-2017-06-09-01:54:50-13793' for creating a case in Red Hat Customer Portal

2. Attached files: libvirtd.log, qemu-kvm-rhev.btrace
Comment 2 chhu 2017-06-09 04:08 EDT
Created attachment 1286332 (libvirtd.log)
Comment 3 Stefan Hajnoczi 2017-06-09 11:16:54 EDT
Patch posted upstream: [PATCH] hw/i386: fix nvdimm check error path
Comment 5 Stefan Hajnoczi 2017-06-16 10:21:46 EDT
This bug cannot be triggered by libvirt since the necessary -M nvdimm=on option is automatically added.  Therefore customers are not affected by this bug.

The patch hasn't been merged upstream yet, so let's move it to RHEL 7.5.
Comment 6 Stefan Hajnoczi 2017-06-22 10:22:03 EDT
Merged upstream in commit 7f3cf2d6e7d1231d854902c9016823961e59d1f4 ("hw/i386: fix nvdimm check error path").  This will come into RHEL 7.5 via the next rebase.
Comment 9 Yumei Huang 2017-11-08 03:40:51 EST

Reproduce with qemu-kvm-rhev-2.9.0-9.el7:

1. Boot guest without 'nvdimm' in '-M'

# /usr/libexec/qemu-kvm -m 5G,slots=4,maxmem=32G rhel75-64-virtio.qcow2 -monitor stdio -vnc :0 -M pc

2. Hotplug nvdimm twice

(qemu) object_add memory-backend-file,id=mem1,share=on,mem-path=/tmp/aa,size=1G
(qemu)  device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'
(qemu)  device_add nvdimm,memdev=mem1,id=nvdimm1

QEMU quits with the following message:

qemu-kvm: /builddir/build/BUILD/qemu-2.9.0/exec.c:1575: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed.
Aborted (core dumped)

So the bug is reproduced.


Verified with qemu-kvm-rhev-2.10.0-1.el7: with the same steps as above, after hot-plugging nvdimm twice, QEMU does not quit, prints "nvdimm is not enabled: missing 'nvdimm' in '-M'" both times, and the guest works well.

(qemu) object_add memory-backend-file,id=mem1,share=on,mem-path=/tmp/aa,size=1G
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'
(qemu) device_add nvdimm,memdev=mem1,id=nvdimm1
nvdimm is not enabled: missing 'nvdimm' in '-M'

So the bug is fixed.
Comment 11 errata-xmlrpc 2018-04-10 20:23:04 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

