Bug 1854684 - Fail to attach nvdimm device when pmem enabled
Summary: Fail to attach nvdimm device when pmem enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Peter Krempa
QA Contact: Jing Qi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-08 02:05 UTC by Luyao Huang
Modified: 2020-11-17 17:50 UTC
CC List: 6 users

Fixed In Version: libvirt-6.6.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 17:50:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Luyao Huang 2020-07-08 02:05:27 UTC
Description of problem:
Fail to attach nvdimm device when pmem enabled

Version-Release number of selected component (if applicable):
libvirt-daemon-6.4.0-1.module+el8.3.0+6881+88468c00.x86_64
qemu-kvm-5.0.0-0.module+el8.3.0+6620+5d5e1420.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a running guest whose XML contains maxMemory and numa elements:

# virsh dumpxml vm1
...
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
...
    <numa>
      <cell id='0' cpus='0-1' memory='512000' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='512000' unit='KiB'/>
    </numa>
...
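
If the backing file does not exist yet, it can be created ahead of time. A minimal sketch, assuming a plain file is acceptable for the test (with <pmem/>, QEMU ideally wants the file on a DAX-capable filesystem):

# truncate -s 512M /mnt2/test.pmem    # 512 MiB matches the 524288 KiB target size used below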

2. Hotplug a pmem-enabled nvdimm device:

# cat mem.xml 
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/mnt2/test.pmem</path>
        <pmem/>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='nvdimm0'/>
      <address type='dimm' slot='0'/>
    </memory>

# virsh attach-device vm1 mem.xml 
error: Failed to attach device from mem.xml
error: internal error: unable to execute QEMU command 'object-add': Invalid parameter type for 'pmem', expected: boolean
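
The failure is a type mismatch in the QMP message libvirt builds (see the debug log under Additional info): the 'pmem' property is sent as the string "on", while QEMU declares it as a boolean. The same object-add call with the property as a JSON boolean is the accepted form:

{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memnvdimm0","props":{"prealloc":true,"mem-path":"/mnt2/test.pmem","share":true,"size":536870912,"pmem":true}},"id":"libvirt-373"}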



Actual results:
Attaching the nvdimm device fails when pmem is enabled.

Expected results:
The nvdimm device is hotplugged successfully.

Additional info:

libvirtd debug log:

2020-07-08 01:51:41.862+0000: 60274: debug : qemuDomainObjEnterMonitorInternal:6678 : Entering monitor (mon=0x7f7438032c00 vm=0x7f73fc378010 name=vm1)
2020-07-08 01:51:41.862+0000: 60274: debug : qemuMonitorAddObject:2960 : type=memory-backend-file id=memnvdimm0
2020-07-08 01:51:41.862+0000: 60274: debug : qemuMonitorAddObject:2962 : mon:0x7f7438032c00 vm:0x7f73fc378010 fd:37
2020-07-08 01:51:41.862+0000: 60274: info : qemuMonitorSend:935 : QEMU_MONITOR_SEND_MSG: mon=0x7f7438032c00 msg={"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memnvdimm0","props":{"prealloc":true,"mem-path":"/mnt2/test.pmem","share":true,"size":536870912,"pmem":"on"}},"id":"libvirt-373"}^M
 fd=-1
2020-07-08 01:51:41.862+0000: 60394: info : qemuMonitorIOWrite:431 : QEMU_MONITOR_IO_WRITE: mon=0x7f7438032c00 buf={"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memnvdimm0","props":{"prealloc":true,"mem-path":"/mnt2/test.pmem","share":true,"size":536870912,"pmem":"on"}},"id":"libvirt-373"}^M
 len=207 ret=207 errno=0
2020-07-08 01:51:41.863+0000: 60394: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"id": "libvirt-373", "error": {"class": "GenericError", "desc": "Invalid parameter type for 'pmem', expected: boolean"}}]
2020-07-08 01:51:41.863+0000: 60394: info : qemuMonitorJSONIOProcessLine:240 : QEMU_MONITOR_RECV_REPLY: mon=0x7f7438032c00 reply={"id": "libvirt-373", "error": {"class": "GenericError", "desc": "Invalid parameter type for 'pmem', expected: boolean"}}
2020-07-08 01:51:41.863+0000: 60274: debug : qemuMonitorJSONCheckErrorFull:402 : unable to execute QEMU command {"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memnvdimm0","props":{"prealloc":true,"mem-path":"/mnt2/test.pmem","share":true,"size":536870912,"pmem":"on"}},"id":"libvirt-373"}: {"id":"libvirt-373","error":{"class":"GenericError","desc":"Invalid parameter type for 'pmem', expected: boolean"}}
2020-07-08 01:51:41.863+0000: 60274: error : qemuMonitorJSONCheckErrorFull:416 : internal error: unable to execute QEMU command 'object-add': Invalid parameter type for 'pmem', expected: boolean
2020-07-08 01:51:41.863+0000: 60274: debug : qemuDomainObjExitMonitorInternal:6701 : Exited monitor (mon=0x7f7438032c00 vm=0x7f73fc378010 name=vm1)
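
For reference, the mismatch can be reproduced against the monitor directly, bypassing the device XML path. A sketch using virsh qemu-monitor-command, with a hypothetical object id "testpmem":

# virsh qemu-monitor-command vm1 '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"testpmem","props":{"mem-path":"/mnt2/test.pmem","share":true,"size":536870912,"pmem":"on"}}}'
{"error":{"class":"GenericError","desc":"Invalid parameter type for 'pmem', expected: boolean"},"id":"libvirt-1"}

Changing "pmem":"on" to "pmem":true in the same command is expected to succeed, provided the guest was started with nvdimm support (see Comment 3).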

Comment 1 Peter Krempa 2020-07-08 07:14:43 UTC
The wrong type is used to format the JSON object in libvirt. We need to use a boolean, as the error message reports. Unfortunately, command-line interactions shielded this bug from detection.
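
To illustrate why the command line masked the bug: QEMU's command-line option parser coerces "on"/"yes" strings into booleans, so a cold-plugged nvdimm built with the old formatting still worked. The equivalent -object argument on the command line (an illustrative sketch) is:

-object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/mnt2/test.pmem,share=yes,size=536870912,pmem=on

QMP, by contrast, validates the JSON types strictly, so the string form is rejected there.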

Comment 2 Peter Krempa 2020-07-08 09:49:59 UTC
Fixed upstream:

commit e95da4e5bf53ff977f440903df9f7343f2fb6f0e
Author: Peter Krempa <pkrempa>
Date:   Wed Jul 8 09:13:42 2020 +0200

    qemuBuildMemoryBackendProps: Use boolean type for 'pmem' property
    
    Commit 82576d8f35e used a string "on" to enable the 'pmem' property.
    This is okay for the command line visitor, but the property is declared
    as boolean in qemu and thus it will not work when using QMP.
    
    Modify the type to boolean. This changes the command line, but
    fortunately the command line visitor in qemu parses both 'yes' and 'on'
    as true for the property.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1854684
    
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>
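
The net effect of the change on what libvirt emits (the "before" taken from the debug log above, the "after" per the commit message):

  QMP:          "pmem":"on"  ->  "pmem":true
  command line: pmem=on      ->  pmem=yes   (both parsed as true by QEMU's option visitor)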

Comment 3 Jing Qi 2020-07-10 06:54:36 UTC
I built a new version libvirt-6.6.0-1.el8.x86_64 with upstream code.

Tried to verify the bug -

1. Prepare a running guest whose XML contains maxMemory and numa elements:

# virsh dumpxml vm1
...
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
...
    <numa>
      <cell id='0' cpus='0-1' memory='512000' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='512000' unit='KiB'/>
    </numa>
...

2. Hotplug a pmem-enabled nvdimm device:

# cat mem.xml 
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/mnt2/test.pmem</path>
        <pmem/>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='nvdimm0'/>
      <address type='dimm' slot='0'/>
    </memory>

# virsh attach-device avocado-vt-vm mem.xml 
error: Failed to attach device from mem.xml
error: internal error: unable to execute QEMU command 'device_add': nvdimm is not enabled: missing 'nvdimm' in '-M'
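
This second error is separate from the original bug: nvdimm hotplug requires the machine to be started with nvdimm support enabled, and libvirt enables it only when the domain XML already contains an nvdimm device at boot. In raw QEMU terms this corresponds to a machine flag along the lines of (illustrative):

  -machine pc,nvdimm=on

Hence step 3 below defines an nvdimm in the XML before starting the guest.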

3. Added an nvdimm device to the domain XML before the VM was started:
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/mnt2/test.pmem</path>
        <pmem/>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='nvdimm0'/>
      <address type='dimm' slot='0'/>
    </memory>
# virsh start avocado-vt-vm
Domain avocado-vt-vm started

4. Attached the nvdimm device again, this time with mem.xml containing:

    <memory model='nvdimm' access='shared'>
      <source>
        <path>/mnt2/test.pmem</path>
        <pmem/>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
    </memory>
# virsh attach-device avocado-vt-vm mem.xml 
Device attached successfully
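
One way to double-check the result from the monitor side (a sketch; query-memory-devices is a standard QMP query):

# virsh qemu-monitor-command avocado-vt-vm --pretty '{"execute":"query-memory-devices"}'

The reply should list the nvdimm devices together with their memory-backend-file backends.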

Comment 6 Jing Qi 2020-09-14 09:08:34 UTC
Verified with libvirt-6.6.0-4.module+el8.3.0+7883+3d717aa8.x86_64 &
qemu-kvm-4.2.0-33.module+el8.3.0+7705+f09d73e4.x86_64.

Steps are the same as Comment 3 above.

Comment 9 errata-xmlrpc 2020-11-17 17:50:15 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137

