Bug 1352769

Summary: QEMU core dumped when querying memory devices in HMP after unplugging the memdev of an nvdimm
Product: Red Hat Enterprise Linux 7
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: high
Target Milestone: rc
Fixed In Version: QEMU 2.8
Type: Bug
Reporter: Yumei Huang <yuhuang>
Assignee: Igor Mammedov <imammedo>
QA Contact: Yumei Huang <yuhuang>
CC: chayang, hhuang, jinzhao, juzhang, knoel, mdeng, michen, mrezanin, qzhang, virt-maint, xfu
Last Closed: 2017-08-01 23:32:13 UTC

Description Yumei Huang 2016-07-05 03:16:38 UTC
Description of problem:
Boot a guest with an nvdimm device, unplug the memdev backing the nvdimm, then query memory devices in HMP; QEMU core dumps.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.6.0-11.el7
kernel-3.10.0-456.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Boot a guest with an nvdimm backed by a regular file on the host:
# /usr/libexec/qemu-kvm -name rhel73 -m 8G,slots=240,maxmem=20G -smp 4 \
 -realtime mlock=off -no-user-config -nodefaults \
 -drive file=/home/guest/RHEL-Server-7.3-64-virtio-scsi.qcow2,if=none,id=drive-disk,format=qcow2,cache=none \
 -device virtio-scsi-pci,id=scsi0 -device scsi-hd,drive=drive-disk,bus=scsi0.0,id=scsi-hd0 \
 -netdev tap,id=hostnet1 -device virtio-net-pci,mac=42:ce:a9:d2:4d:d7,id=idlbq7eA,netdev=hostnet1 \
 -usb -device usb-tablet,id=input0 -vga qxl \
 -spice port=5902,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on -monitor stdio \
 -object memory-backend-file,mem-path=/home/guest/test.img,id=mem0,size=2G,share \
 -device nvdimm,memdev=mem0,id=nvdimm0 \
 -M pc,nvdimm=on

2. Hot-unplug the memory backend object:
(qemu) object_del mem0 

3. Query memory devices in HMP:
(qemu) info memory-devices 


Actual results:
(qemu) info memory-devices 
**
ERROR:qom/object.c:1577:object_get_canonical_path_component: assertion failed: (obj->parent != NULL)
Aborted (core dumped)
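
[Editorial note, not part of the original report: the abort happens inside QEMU's QOM layer. Below is a rough sketch of the failing function, paraphrased from the 2.6-era qom/object.c rather than copied verbatim; details vary between releases.]

#include "qemu/osdep.h"   /* in-tree QEMU headers; this only builds inside a QEMU source tree */
#include "qom/object.h"

gchar *object_get_canonical_path_component(Object *obj)
{
    ObjectProperty *prop;
    ObjectPropertyIterator iter;

    /* "info memory-devices" resolves the QOM path of each DIMM's
     * memdev backend.  object_del has already unparented "mem0",
     * so for the dangling memdev link obj->parent is NULL here and
     * this assertion aborts QEMU -- the crash reported above. */
    g_assert(obj->parent != NULL);

    /* Find which child property of the parent points back at obj. */
    object_property_iter_init(&iter, obj->parent);
    while ((prop = object_property_iter_next(&iter))) {
        if (object_property_is_child(prop) && prop->opaque == obj) {
            return g_strdup(prop->name);
        }
    }

    /* obj had a parent but was not one of its children: a broken
     * QOM invariant, should never happen */
    g_assert_not_reached();
    return NULL;
}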


Expected results:
QEMU keeps running normally; querying memory devices does not crash.

Additional info:

Comment 2 Igor Mammedov 2016-09-20 11:53:33 UTC
Moving to 7.4, as it is too late in the 7.3 cycle for fixes to a Tech Preview-only feature.

Comment 3 Igor Mammedov 2017-01-03 13:45:00 UTC
Looks like this is fixed upstream; tried with 2.8, with the following result:

(qemu) object_del mem0
object 'mem0' is in use, can not be deleted


Please retest once the rebase to 2.8 is available.
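
[Editorial note: the "in use" refusal comes from a check added upstream, in which object_del asks the object whether it may be deleted before unparenting it. A sketch of the relevant logic in qom/object_interfaces.c, paraphrased from the 2.8-era sources; signatures may differ between versions.]

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qom/object_interfaces.h"

void user_creatable_del(const char *id, Error **errp)
{
    Object *container = object_get_objects_root();
    Object *obj = object_resolve_path_component(container, id);

    if (!obj) {
        error_setg(errp, "object '%s' not found", id);
        return;
    }

    /* Ask the object whether deletion is allowed.  A memory backend
     * answers "no" while a DIMM/NVDIMM device still maps it, which
     * produces the "is in use" error quoted above instead of leaving
     * a dangling object behind for "info memory-devices" to trip over. */
    if (!user_creatable_can_be_deleted(USER_CREATABLE(obj), errp)) {
        error_setg(errp, "object '%s' is in use, can not be deleted", id);
        return;
    }

    object_unparent(obj);
}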

Comment 6 Yumei Huang 2017-02-17 06:01:57 UTC
Reproduce:
qemu-kvm-rhev-2.6.0-29.el7
kernel-3.10.0-558.el7.x86_64

Steps:
1. Boot a guest with an nvdimm:
# /usr/libexec/qemu-kvm -m 4G,slots=40,maxmem=40G rhel74-64-virtio-scsi.qcow2 \
 -netdev tap,id=idinWyYp,vhost=on -device virtio-net-pci,mac=42:ce:a9:d2:4d:d7,id=idlbq7eA,netdev=idinWyYp \
 -monitor stdio -vnc :1 -serial unix:/tmp/console,server,nowait \
 -object memory-backend-file,mem-path=/home/guest/test.img,id=mem0,size=2G,share \
 -device nvdimm,memdev=mem0,id=nvdimm0 \
 -M pc,nvdimm=on

2. Hot-unplug the memory backend object:
(qemu) object_del mem0 

3. Query memory devices in HMP:
(qemu) info memory-devices 

QEMU core dumped:
(qemu) object_del mem0 
(qemu) info memory-devices 
**
ERROR:qom/object.c:1585:object_get_canonical_path_component: assertion failed: (obj->parent != NULL)
Aborted (core dumped)

Verify:
qemu-kvm-rhev-2.8.0-4.el7
kernel-3.10.0-558.el7.x86_64

With the same steps as above, the HMP output is as expected and the guest works well.

(qemu) object_del mem0 
object 'mem0' is in use, can not be deleted
(qemu)  info memory-devices 
Memory device [dimm]: "nvdimm0"
  addr: 0x140000000
  slot: 0
  node: 0
  size: 2147483648
  memdev: /objects/mem0
  hotplugged: false
  hotpluggable: true

So the bug is fixed.
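
[Editorial note: for completeness, the per-backend half of the deletion check. The host-memory backend tracks whether any device has mapped it; a minimal sketch, paraphrased from upstream backends/hostmem.c, where the mapped flag is set when a DIMM/NVDIMM realizes against the backend and cleared on unplug.]

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "sysemu/hostmem.h"

static bool host_memory_backend_can_be_deleted(UserCreatable *uc,
                                               Error **errp)
{
    /* nvdimm0 still maps mem0, so deletion is refused until the
     * device itself goes away first -- hence the clean "in use"
     * error above instead of a crash. */
    return !host_memory_backend_is_mapped(MEMORY_BACKEND(uc));
}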

Comment 8 errata-xmlrpc 2017-08-01 23:32:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392
