Bug 1269715 - Can't start VM with memory modules if memory placement is auto
Summary: Can't start VM with memory modules if memory placement is auto
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-08 04:04 UTC by Luyao Huang
Modified: 2016-11-03 18:25 UTC
CC List: 5 users

Fixed In Version: libvirt-1.3.3-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:25:42 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  System ID:    Red Hat Product Errata RHSA-2016:2577
  Private:      no
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      Moderate: libvirt security, bug fix, and enhancement update
  Last Updated: 2016-11-03 12:07:06 UTC

Description Luyao Huang 2015-10-08 04:04:02 UTC
Description of problem:
Cannot hot-plug a memory device if NUMA memory placement is 'auto'

Version-Release number of selected component (if applicable):
libvirt-1.2.17-12.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1.# virsh dumpxml rhel7.0-rhel
<domain type='kvm' id='3'>
  <name>rhel7.0-rhel</name>
  <uuid>67c7a123-5415-4136-af62-a2ee098ba6cd</uuid>
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
  <memory unit='KiB'>1536000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
  <vcpu placement='auto' current='2'>4</vcpu>
  <iothreads>1</iothreads>
  <cputune>
    <iothreadpin iothread='1' cpuset='3'/>
  </cputune>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
...


2.
# cat memdevice.xml
    <memory model='dimm'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
      </target>
    </memory>

3.
# virsh attach-device rhel7.0-rhel memdevice.xml
error: Failed to attach device from memdevice.xml
error: internal error: Advice from numad is needed in case of automatic numa placement

Actual results:
hot-plug failed

Expected results:
hot-plug success

Additional info:
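For illustration (not part of the original report): libvirt only consults numad when placement is 'auto', so pinning an explicit nodeset sidesteps the failing check. A minimal numatune sketch, assuming the host has NUMA nodes 0-1:

    <numatune>
      <!-- explicit nodeset: libvirt binds memory itself and never asks numad -->
      <memory mode='strict' nodeset='0-1'/>
    </numatune>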

Comment 1 Ján Tomko 2015-10-08 08:24:14 UTC
Libvirtd could possibly reuse the outdated advice received from numad at domain startup. Even though the domain would now require more memory than originally requested, there could be a chance the allocation will succeed. On the other hand, allowing this might trick people into thinking the combination of numad and memory hotplug does anything useful.
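For reference, the node set that libvirt obtained from numad at domain startup can be inspected on a running domain; a minimal check (example output assumed; the same format appears in Comment 7 below):

    # virsh numatune rhel7.0-rhel
    numa_mode      : strict
    numa_nodeset   : 0-1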

Comment 2 Luyao Huang 2015-10-09 01:05:28 UTC
(In reply to Ján Tomko from comment #1)
> Libvirtd could possibly reuse the outdated advice received from numad at
> domain startup. Even though the domain would now require more memory than
> originally requested, there could be a chance the allocation will succeed.
> On the other hand, allowing this might trick people into thinking the
> combination of numad and memory hotplug does anything useful.

Hi Jan,

Thanks for your quick reply. I found I can also hit this error when starting a guest:

1.
# virsh dumpxml rhel7.0-rhel
<domain type='kvm'>
  <name>rhel7.0-rhel</name>
  <uuid>67c7a123-5415-4136-af62-a2ee098ba6cd</uuid>
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
  <memory unit='KiB'>1667072</memory>
...
    <memory model='dimm'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
      </target>
    </memory>
...

2.
# virsh start rhel7.0-rhel
error: Failed to start domain rhel7.0-rhel
error: internal error: Advice from numad is needed in case of automatic numa placement

3. check the debug log:

2015-10-08 09:04:41.822+0000: 16772: debug : virCommandRunAsync:2428 : About to run /bin/numad -w 2:1628

We can see that libvirt passed 1628 MB to numad (1667072 KiB, which already includes the memory device), so why do we forbid starting the guest in this case? And what about vCPU hot-plug when vcpu placement is 'auto' and numatune placement is 'auto' (we don't forbid hot-plugging vCPUs in that case)?

Thanks in advance for your reply.
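For context, the numad invocation in that log line is a placement query: "-w NCPUS:MB" asks numad to recommend a node set for NCPUS CPUs and MB megabytes of memory, printed to stdout. Run by hand (output is host-dependent; shown here only as an example):

    # /bin/numad -w 2:1628
    0-1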

Comment 3 Peter Krempa 2015-11-12 08:57:56 UTC
I'll have a look to see whether allowing this makes sense, or how we should forbid using auto placement with resource hotplug.

Comment 4 Peter Krempa 2016-03-24 15:15:05 UTC
The original report is not really a bug, since the numad advice is not valid at the point of hotplug. The issue described in Comment 2 is valid though.

Comment 5 Peter Krempa 2016-03-30 16:16:54 UTC
Fixed upstream:

commit 25c39f76b80a5453551e8242e99ebc1986ed0d77
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 24 16:05:11 2016 +0100

    qemu: command: Pass numad nodeset when formatting memory devices at boot
    
    When starting up a VM libvirtd asks numad to place the VM in case of
    automatic nodeset. The nodeset would not be passed to the memory device
    formatter and the user would get an error.
    

I might later add a patch that improves the error message for the original issue, which isn't a bug.
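In effect, the fix forwards the numad-computed nodeset to the memory-backend object that backs each dimm at boot, so the qemu command line gains an explicit binding, e.g. (the exact form is confirmed in Comment 7 below):

    -object memory-backend-ram,id=memdimm0,size=134217728,host-nodes=0-1,policy=bind \
    -device pc-dimm,node=0,memdev=memdimm0,id=dimm0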

Comment 7 Luyao Huang 2016-08-09 03:32:49 UTC
Verified this bug with libvirt-2.0.0-4.el7.x86_64:

1. prepare a guest like this:
...
  <maxMemory slots='16' unit='KiB'>15242882</maxMemory>
  <memory unit='KiB'>1179648</memory>
  <currentMemory unit='KiB'>1179648</currentMemory>
...
  <vcpu placement='auto' current='6'>9</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
...
  <cpu>
    <numa>
      <cell id='0' cpus='0-2' memory='524288' unit='KiB'/>
      <cell id='1' cpus='3-5' memory='524288' unit='KiB'/>
    </numa>
  </cpu>
...
    <memory model='dimm'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
      </target>
    </memory>
...

2. start guest
# virsh start r7
Domain r7 started

3. check qemu cmd line

# virsh numatune r7
numa_mode      : strict
numa_nodeset   : 0-1

# ps aux|grep qemu
... -object memory-backend-ram,id=memdimm0,size=134217728,host-nodes=0-1,policy=bind -device pc-dimm,node=0,memdev=memdimm0,id=dimm0
...

4. libvirt still reports the old error when attaching a memory device to a guest with an automatic nodeset; per Comment 4, this hot-plug case is expected to fail:

# cat memdevice.xml 
    <memory model='dimm'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
      </target>
    </memory>

# virsh attach-device r7 memdevice.xml 
error: Failed to attach device from memdevice.xml
error: internal error: Advice from numad is needed in case of automatic numa placement

Comment 9 errata-xmlrpc 2016-11-03 18:25:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

