Bug 1191567 - numa-enabled domains cannot be migrated from RHEL hosts older than 7.1 to 7.1
Summary: numa-enabled domains cannot be migrated from RHEL hosts older than 7.1 to 7.1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1185786
Depends On:
Blocks: 1035038 1190580 1191617 1194982 1196644
 
Reported: 2015-02-11 14:07 UTC by Roy Golan
Modified: 2015-11-19 06:14 UTC
CC List: 15 users

Fixed In Version: libvirt-1.2.13-1.el7
Doc Type: Bug Fix
Doc Text:
A prior QEMU update introduced one-to-one Non-Uniform Memory Access (NUMA) memory pinning of guest NUMA nodes to host NUMA nodes, which also included a new way of specifying NUMA at QEMU startup. However, the libvirt library previously always used the newer NUMA specification, even if one-to-one NUMA pinning was not specified in the libvirt configuration XML file. This caused the guest to have an incompatible application binary interface (ABI), which in turn led to failed migration of NUMA domains from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. With this update, libvirt only uses the newer NUMA specification when it is specified in the configuration, and the described NUMA domains migrate correctly.
Clone Of:
Clones: 1191617 1194982
Environment:
Last Closed: 2015-11-19 06:14:51 UTC
Target Upstream Version:
Embargoed:


Links
System: Red Hat Product Errata
ID: RHBA-2015:2202
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2015-11-19 08:17:58 UTC

Description Roy Golan 2015-02-11 14:07:10 UTC
Description of problem:
Running a VM with NUMA using machine type rhel6.5.0 fails with:

Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2264, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3323, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3424, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: early end of file from monitor: possible problem:
2015-02-09T08:17:58.028331Z qemu-kvm: -numa memdev is not supported by machine rhel6.5.0

The same machine type works on RHEL 7.0, which probably didn't have the memdev option.

Migrating that VM from RHEL 7.0 to 7.1 will probably fail as well.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run a VM with NUMA on a NUMA host with machine type rhel6.5.0 (see the XML sketch under Additional info below).
2.
3.

Actual results:
The VM fails to start.

Expected results:
libvirt should handle NUMA VMs on emulated machine types that don't support memdev.

Additional info:
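For reference, a minimal XML fragment that reproduces this (a sketch assembled from the configurations shown in comments 11 and 19; memory sizes are arbitrary):

<os>
  <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
</os>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='2048000'/>
  </numa>
</cpu>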

Comment 3 Michal Privoznik 2015-02-11 15:32:50 UTC
(In reply to Roy Golan from comment #0)
>

I'm afraid this is a bigger problem. First of all, this is a qemu bug: qemu is lying about the devices it supports, making libvirt think that memdev is supported. However, fixing that so the device is no longer reported for the rhel6.5.0 machine type will not help, because libvirt checks supported devices against the 'none' machine type. In other words, the machine type is not considered by libvirt at all.

I've asked online to find the proper solution:

https://www.redhat.com/archives/libvir-list/2015-February/msg00369.html

Stay tuned. Meanwhile, to get the qemu guys' attention, I'm cloning this over to qemu-kvm-rhev.
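
To quickly check which -numa form libvirt generates for a given guest XML without starting the domain, virsh domxml-to-native can be used (a sketch; guest.xml is a placeholder path, and the memdev output matches the pre-fix behavior shown later in comment 11):

# virsh domxml-to-native qemu-argv guest.xml | grep -o -- '-numa [^ ]*'
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0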

Comment 4 dyuan 2015-02-12 02:52:23 UTC
Could you please provide the libvirt version?

Hope this comment is helpful.
https://bugzilla.redhat.com/show_bug.cgi?id=1170093#c18

Comment 5 Roy Golan 2015-02-12 07:04:14 UTC
pasted from bug 1190580

Additional information about versions:
3.10.0-227.el7.x86_64
Red Hat Enterprise Linux Server release 7.1 (Maipo)
libvirt-1.2.8-16.el7.x86_64

Comment 6 dyuan 2015-02-12 10:16:58 UTC
(In reply to Roy Golan from comment #5)
> pasted from bug 1190580
> 
> Additional information about versions:
> 3.10.0-227.el7.x86_64
> Red Hat Enterprise Linux Server release 7.1 (Maipo)
> libvirt-1.2.8-16.el7.x86_64

Thanks.  Reproduced it.

Bug 1190580 just fixed the scenario with 
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='2048000'/>
    </numa>
  </cpu>

Comment 7 Roy Golan 2015-02-12 11:50:21 UTC
(In reply to dyuan from comment #6)
> (In reply to Roy Golan from comment #5)
> > pasted from bug 1190580
> > 
> > Additional information about versions:
> > 3.10.0-227.el7.x86_64
> > Red Hat Enterprise Linux Server release 7.1 (Maipo)
> > libvirt-1.2.8-16.el7.x86_64
> 
> Thanks.  Reproduced it.
> 
> Bug 1190580 just fixed the scenario with 
>   <cpu>
>     <numa>
>       <cell id='0' cpus='0-1' memory='2048000'/>
>     </numa>
>   </cpu>

Isn't this what we already send out today?

Comment 8 Michal Privoznik 2015-02-12 15:11:56 UTC
Patches proposed upstream:

https://www.redhat.com/archives/libvir-list/2015-February/msg00410.html

Comment 9 Michal Privoznik 2015-02-12 17:05:40 UTC
These patches may be handy too:

https://www.redhat.com/archives/libvir-list/2015-February/msg00418.html

Comment 10 Eduardo Habkost 2015-02-12 19:20:27 UTC
Is anybody able to explain why the fix for bug 1175397 was not enough for this use case?

Comment 11 dyuan 2015-02-13 01:26:54 UTC
Before the bug 1175397 fix, testing with libvirt -11 and -machine rhel6.5.0:
1. # virsh domxml-to-native qemu-argv guest.xml
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='2048000'/>
     </numa>
   </cpu>

-object memory-backend-ram,size=2000M,id=ram-node0
-numa node,nodeid=0,cpus=0-1,*memdev=ram-node0*

After libvirt -12:
1. # virsh domxml-to-native qemu-argv guest.xml
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='2048000'/>
     </numa>
   </cpu>

-numa node,nodeid=0,cpus=0-1,*mem=2000*


For this bug, the XML should be:
   <numatune>
     <memory mode='strict' nodeset='0'/>
   </numatune>
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='2048000'/>
     </numa>
   </cpu>

-object memory-backend-ram,size=2000M,id=ram-node0,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-1,*memdev=ram-node0*

Comment 12 Michal Privoznik 2015-02-17 12:04:16 UTC
And I've just pushed patches upstream:

commit 7832fac84741d65e851dbdbfaf474785cbfdcf3c
Author:     Michal Privoznik <mprivozn>
AuthorDate: Thu Feb 12 17:43:27 2015 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Tue Feb 17 09:07:09 2015 +0100

    qemuBuildMemoryBackendStr: Report backend requirement more appropriately
    
    So, when building the '-numa' command line, the
    qemuBuildMemoryBackendStr() function does quite a lot of checks to
    choose the best backend, or to check if one is in fact needed. However,
    it returned that a backend is needed even for this little fella:
    
      <numatune>
        <memory mode="strict" nodeset="0,2"/>
      </numatune>
    
    This can be guaranteed via CGroups entirely, there's no need to use
    memory-backend-ram to let qemu know where to get memory from. Well, as
    long as there's no <memnode/> element, which explicitly requires the
    backend. Long story short, we wouldn't have to care, as qemu works
    either way. However, the problem is migration (as always). Previously,
    libvirt would have started qemu with:
    
      -numa node,memory=X
    
    in this case and restricted memory placement in CGroups. Today, libvirt
    creates more complicated command line:
    
      -object memory-backend-ram,id=ram-node0,size=X
      -numa node,memdev=ram-node0
    
    Again, one wouldn't find anything wrong with these two approaches.
    Both work just fine, unless you try to migrate from the older libvirt
    into the newer one. These two approaches are, unfortunately, not
    compatible. My suggestion is, in order to allow users to migrate, let's
    use the older approach for as long as the newer one is not needed.
    
    Signed-off-by: Michal Privoznik <mprivozn>

commit 38064806966c04d7cf7525cd78aa6f82bd09e6d0
Author:     Michal Privoznik <mprivozn>
AuthorDate: Thu Feb 12 17:39:34 2015 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Tue Feb 17 08:38:19 2015 +0100

    qemuxml2argvtest: Fake response from numad
    
    Well, we can pretend that we've asked numad for its suggestion and let
    the qemu command line be built accordingly. Again, this alone has no
    big value, but see later commits which build on top of this.
    
    Signed-off-by: Michal Privoznik <mprivozn>

commit 65c0fd9dfc712d23721e8052ce655100e230a3b3
Author:     Michal Privoznik <mprivozn>
AuthorDate: Thu Feb 12 17:37:46 2015 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Tue Feb 17 08:38:19 2015 +0100

    numatune_conf: Expose virDomainNumatuneNodeSpecified
    
    This function is going to be needed in the near future.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v1.2.12-143-g7832fac
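
To check whether an installed build already contains these patches, compare the package version against the Fixed In Version field (a sketch; the sample output is the fixed build later used for verification in comment 19):

# rpm -q libvirt
libvirt-1.2.13-1.el7.x86_64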

Comment 16 Jiri Denemark 2015-02-24 12:52:52 UTC
Fixing bug summary because this bug is a bit more general and affects all machine types. Migrating any domain with the following XML from RHEL 6 or 7.0 to 7.1 will fail:

   <numatune>
     <memory mode='strict' nodeset='0'/>
   </numatune>
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='2048000'/>
     </numa>
   </cpu>
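
For reference, the two incompatible command-line forms behind this failure, reproduced from comment 11 (sizes and IDs vary with the configuration). A RHEL 6 or 7.0 source starts the guest with the legacy form:

  -numa node,nodeid=0,cpus=0-1,mem=2000

while an unfixed 7.1 target builds the memdev form for the same XML:

  -object memory-backend-ram,size=2000M,id=ram-node0,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-1,memdev=ram-node0

The resulting guest ABIs are incompatible, so the migration fails.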

Comment 17 Jiri Denemark 2015-04-29 14:27:17 UTC
*** Bug 1185786 has been marked as a duplicate of this bug. ***

Comment 19 zhe peng 2015-06-24 06:48:40 UTC
I can reproduce this.
Verified with build:
libvirt-1.2.13-1.el7.x86_64

1: Prepare a guest with this XML:
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
...
<os>
  <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
  <boot dev='hd'/>
</os>
...
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='1024000' unit='KiB'/>
  </numa>
</cpu>
...
2: Start the guest:
# virsh start rhel6
Domain rhel6 started

3: Check the qemu command line:
.....
-numa node,nodeid=0,cpus=0-1,mem=1000

4: Prepare a RHEL 6 host and create a guest with the same settings.
5: Migrate the guest to the RHEL 7 host:
# virsh migrate --live rhel6 qemu+ssh://$target_ip/system --verbose
Migration: [100 %]

6: Check the target qemu command line:
-numa node,nodeid=0,cpus=0-1,mem=1000

Moving to VERIFIED.

Comment 21 errata-xmlrpc 2015-11-19 06:14:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

