Bug 1461676 - nvdimm hot-unplug support - libvirt
Summary: nvdimm hot-unplug support - libvirt
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: ---
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Virtualization Maintenance
QA Contact: Jing Qi
URL:
Whiteboard:
Depends On: 1499124
Blocks: 1473046
 
Reported: 2017-06-15 07:14 UTC by chhu
Modified: 2021-01-15 07:38 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned as: 1499124
Environment:
Last Closed: 2021-01-15 07:38:11 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:



Description chhu 2017-06-15 07:14:22 UTC
Description of problem:
An nvdimm device can be hotplugged into a guest successfully, but hot-unplug is not supported.

Version-Release number of selected component (if applicable):
libvirt-3.2.0-9.el7.x86_64
qemu-kvm-rhev-2.9.0-9.el7.x86_64
kernel-3.10.0-679.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create the nvdimm backing files on the host:
     # truncate -s 512M /tmp/nvdimm
     # truncate -s 256M /tmp/nvdimm2
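
(Optional check, not part of the original report: the files created by truncate are sparse, and their apparent and allocated sizes can be confirmed with:)
     # ls -lsh /tmp/nvdimm /tmp/nvdimm2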

2. Start a guest with one nvdimm device successfully.
xml:
  <maxMemory slots='16' unit='M'>2048</maxMemory>
  <memory unit='M'>1024</memory>
  <currentMemory unit='M'>512</currentMemory>
  <vcpu placement='static'>4</vcpu>
  .......
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <numa>
      <cell id='0' cpus='0-1' memory='512' unit='M'/>
      <cell id='1' cpus='2-3' memory='512' unit='M'/>
    </numa>
  </cpu>
  ......
     <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm</path>
      </source>
      <target>
        <size unit='M'>512</size>
        <node>1</node>
        <label>
          <size unit='KiB'>256</size>
        </label>
      </target>
      <address type='dimm' slot='0'/>
    </memory>
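
(The guest is then defined and started in the usual way; the XML file name below is only illustrative, while the guest name r7-4 matches the later steps:)
# virsh define r7-4.xml
# virsh start r7-4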

3. Attach the second nvdimm device to the guest successfully.
# cat nvdimm.xml
      <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm2</path>
      </source>
      <target>
        <size unit='M'>256</size>
        <node>1</node>
      </target>
      <address type='dimm' slot='1'/>
    </memory>
# virsh attach-device r7-4 nvdimm.xml
Device attached successfully

4. Log in to the guest and check that there are two nvdimm devices.
# ls /dev/pmem*
/dev/pmem0  /dev/pmem1

# fdisk -l
......
Disk /dev/pmem0: 536 MB, 536608768 bytes, 1048064 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/pmem1: 268 MB, 268435456 bytes, 524288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
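
(Optional cross-check, assuming the ndctl utility is installed in the guest; it lists the nvdimm namespaces backing /dev/pmem0 and /dev/pmem1:)
# ndctl list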


5. Try to detach the second nvdimm device from the guest.
# cat nvdimm.xml 
      <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm2</path>
      </source>
      <target>
        <size unit='M'>256</size>
        <node>1</node>
      </target>
      <address type='dimm' slot='1'/>
    </memory>

# virsh detach-device r7-4 nvdimm.xml
error: Failed to detach device from nvdimm.xml
error: Requested operation is not valid: device not present in domain configuration
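
(To see how libvirt records the hotplugged device, including the alias and the dimm base address used in the next step, the live domain XML can be dumped; this is just the standard way to obtain device XML that matches what libvirt tracks:)
# virsh dumpxml r7-4 | grep -A 12 "model='nvdimm'"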

6. Try to detach the second nvdimm device with XML that also specifies the alias and the dimm base address:
# cat nvdimm-detach.xml 
    <memory model='nvdimm' access='shared'>
      <source>
        <path>/tmp/nvdimm2</path>
      </source>
      <target>
        <size unit='KiB'>262144</size>
        <node>1</node>
      </target>
      <alias name='nvdimm1'/>
      <address type='dimm' slot='1' base='0x11ffc0000'/>
    </memory>

# virsh detach-device r7-4 nvdimm-detach.xml 
error: Failed to detach device from nvdimm-detach.xml
error: internal error: unable to execute QEMU command 'device_del': nvdimm device hot unplug is not supported yet.
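
(Optional check, not part of the original report: after the failed detach the device should still be present from QEMU's point of view; it can be listed through the monitor, assuming the same guest name r7-4:)
# virsh qemu-monitor-command r7-4 --hmp 'info memory-devices'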

Actual results:
In step 5: failed to detach the nvdimm device.
In step 6: nvdimm device hot-unplug is not supported, even though the hotplug in step 3 succeeded.

Expected results:
In step 5: the nvdimm device is detached successfully.
In steps 3 and 6: hotplug and hot-unplug of nvdimm devices are either both supported or both unsupported.


Additional info:
1. According to comment 17 of bug 1270345, the nvdimm hotplug patch series had not yet been merged into QEMU at that time:

“The hotplug patch series has not yet been merged in qemu.git and is therefore not in RHEL.”
link: https://bugzilla.redhat.com/show_bug.cgi?id=1270345#c17

2. Trying to attach an nvdimm device to a guest started without any nvdimm device fails with the error below; see bug 1460119 for details:
# virsh attach-device r7 nvdimm.xml
error: Failed to attach device from nvdimm.xml
error: internal error: unable to execute QEMU command 'device_add': nvdimm is not enabled: missing 'nvdimm' in '-M'
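
(A possible workaround, not verified here: attach the device to the persistent configuration only with --config and restart the guest, so that libvirt enables nvdimm support at the next boot and the device shows up as if cold-plugged:)
# virsh attach-device r7 nvdimm.xml --config
# virsh shutdown r7
# virsh start r7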

Comment 2 Michal Privoznik 2017-09-13 15:20:38 UTC
So there are two problems here:

Firstly, libvirt needs to start qemu with '-M nvdimm=on', otherwise nvdimm hotplug is not possible. However, libvirt can't know in advance whether users will want to hotplug an nvdimm some time in the future. I guess we can't just add nvdimm=on unconditionally (well, for those qemus which support it) as it might change the guest ABI, right? BTW, in my testing I successfully migrated from no nvdimm to nvdimm=on, but my testing might be limited.
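
(For illustration, the machine flag in question, which the device_add error above reports as missing, would look roughly like this on the QEMU command line; the concrete machine type is only an example:)
    -machine pc-i440fx-rhel7.4.0,nvdimm=on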

Secondly, in my testing, after I patched libvirt to enable nvdimm unconditionally, I was able to hotplug an NVDIMM module but was unable to detach it afterwards:

error: internal error: unable to execute QEMU command 'device_del': nvdimm device hot unplug is not supported yet.

Looks like hot-unplug is not implemented yet. I'll create a separate bug for that.

Comment 3 Michal Privoznik 2017-09-13 15:30:01 UTC
There's some discussion happening on the upstream list:

https://www.redhat.com/archives/libvir-list/2017-September/msg00328.html

Comment 12 RHEL Program Management 2021-01-15 07:38:11 UTC
After evaluating this issue, we have determined that there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

