Bug 1196644 - domains with hugepages settings cannot be migrated from RHEL hosts older than 7.1 to 7.1
Summary: domains with hugepages settings cannot be migrated from RHEL hosts older than...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1191567 1194982
Blocks: 1035038 1172230 1205796 1250959
 
Reported: 2015-02-26 12:50 UTC by Luyao Huang
Modified: 2015-11-19 06:18 UTC (History)
18 users

Fixed In Version: libvirt-1.2.17-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1194982
Environment:
Last Closed: 2015-11-19 06:18:05 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2202 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2015-11-19 08:17:58 UTC

Description Luyao Huang 2015-02-26 12:50:25 UTC
+++ This bug was initially created as a clone of Bug #1194982 +++

A VM with hugepages settings cannot be migrated from RHEL hosts older than 7.1 to a 7.1 host,

with the following hugepage setting in /etc/libvirt/qemu.conf:

hugetlbfs_mount = "/dev/hugepages"


--- Additional comment from Luyao Huang on 2015-02-26 04:08:04 EST ---

....

8. Cross-migration test for a VM with hugepages settings on a rhel6.6 host:
# virsh dumpxml r6
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
...
  <os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <boot dev='hd'/>
  </os>
...
  <cpu>
    <numa>
      <cell cpus='0-1' memory='1024000'/>
    </numa>
  </cpu>
...

9. Migrate to the rhel7.1 host (this fails, but it does not appear to be a NUMA node settings issue):
# virsh migrate r6 --live qemu+ssh://10.66.6.19/system
root@10.66.6.19's password: 
error: internal error: Unable to find any usable hugetlbfs mount for 0 KiB


This is where the problem appears:

Hi Michal,

could you please help check these two issues I hit while trying to verify this bug? Both of them may affect the test result:

1. When I try to verify this issue with libvirt-1.2.8-16.el7_1.1.x86_64, cross migration does not succeed every time (steps are in verify step 3),
and the only thing I can find is a warning in libvirtd.log on the target host (OS is rhel6):

2015-02-26 08:25:29.796+0000: 302: warning : qemuDomainObjEnterMonitorInternal:1062 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous

Will this issue affect verification of this bug?


2. I tested cross migration with hugepages and found that I cannot migrate
a VM with hugepages settings from rhel6 to rhel7.1, or from rhel7.0 to rhel7.1, although migration from rhel6 to rhel7.0 works fine. The reason seems to be that rhel7.1 libvirt forbids starting a VM with XML like this:
...
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
...

Does this issue need a fix in rhel7.1.z? And will it affect verification of this bug?

Thanks in advance for your answer!!

--- Additional comment from Luyao Huang on 2015-02-26 04:35:42 EST ---

r6 XML for issue 2:

<domain type='kvm' id='5'>
  <name>r6</name>
  <uuid>63b566d4-40e9-4152-b784-f46cc953abb0</uuid>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
...
  <os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <boot dev='hd'/>
  </os>
...
  <cpu>
    <numa>
      <cell cpus='0-1' memory='1024000'/>
    </numa>
  </cpu>
...
</domain>


--- Additional comment from Michal Privoznik on 2015-02-26 05:44:43 EST ---

(In reply to Luyao Huang from comment #9)

> Hi Michal,
> 
> could you please help check these two issues I hit while trying to verify this
> bug? Both of them may affect the test result:
> 
> 1. When I try to verify this issue with libvirt-1.2.8-16.el7_1.1.x86_64,
> cross migration does not succeed every time (steps are in verify step 3),
> and the only thing I can find is a warning in libvirtd.log on the target host (OS is rhel6):
> 
> 2015-02-26 08:25:29.796+0000: 302: warning :
> qemuDomainObjEnterMonitorInternal:1062 : This thread seems to be the async
> job owner; entering monitor without asking for a nested job is dangerous

Despite what the message says, it's harmless.

> 
> Will this issue affect verification of this bug?

That's okay and probably a qemu bug. If migration sometimes finishes successfully and sometimes doesn't, it's likely a qemu bug anyway, so this is fine on the libvirt side.

> 
> 
> 2. I tested cross migration with hugepages and found that I cannot migrate
> a VM with hugepages settings from rhel6 to rhel7.1, or from rhel7.0 to
> rhel7.1, although migration from rhel6 to rhel7.0 works fine. The reason
> seems to be that rhel7.1 libvirt forbids starting a VM with XML like this:
> ...
>   <memoryBacking>
>     <hugepages/>
>   </memoryBacking>
> ...
> 
> Does this issue need a fix in rhel7.1.z? And will it affect verification of
> this bug?

This is not okay, but it is not closely related to this bug, so I suggest cloning this bug to cover the second part and letting the original one through.

Comment 1 Michal Privoznik 2015-03-31 14:19:57 UTC
I think this is already fixed by:

commit 732586d979738077af7e8b7dfd11d61fe46533c6
Author:     Michal Privoznik <mprivozn>
AuthorDate: Wed Jan 7 15:17:03 2015 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Wed Jan 7 18:32:07 2015 +0100

    qemu: Fix system pages handling in <memoryBacking/>
    
    In one of my previous commits (311b4a67) I've tried to allow to
    pass regular system pages to <hugepages>. However, there was a
    little bug that wasn't caught. If domain has guest NUMA topology
    defined, qemuBuildNumaArgStr() function takes care of generating
    corresponding command line. The hugepages backing for guest NUMA
    nodes is handled there too. And here comes the bug: the hugepages
    setting from XML is stored in KiB internally, however, the system
    pages size was queried and stored in Bytes. So the check whether
    these two are equal was failing even if it shouldn't.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v1.2.11-113-g732586d
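The unit mismatch this commit describes can be illustrated with a short sketch (illustrative Python, not libvirt's actual C code; the variable names and the 4 KiB page size are assumptions for the example):

```python
# Illustrative sketch of the unit-mismatch bug fixed by commit 732586d9.
# Per the commit message: the hugepages setting from the XML is stored
# in KiB, while the system page size was queried and stored in bytes.

SYSTEM_PAGE_SIZE_BYTES = 4096   # e.g. a 4 KiB system page on x86_64
xml_page_size_kib = 4           # regular system pages requested via <hugepages>

# Buggy check: compares KiB against bytes, so 4 != 4096 and the
# perfectly valid "back with regular system pages" case is rejected.
buggy_match = (xml_page_size_kib == SYSTEM_PAGE_SIZE_BYTES)

# Fixed check: normalize both sides to the same unit before comparing.
fixed_match = (xml_page_size_kib == SYSTEM_PAGE_SIZE_BYTES // 1024)

print(buggy_match, fixed_match)  # -> False True
```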

Comment 3 zhe peng 2015-06-24 08:37:13 UTC
Test with build: libvirt-1.2.16-1.el7.x86_64

migration always failed

Tried hugepage testing on rhel7.2:

Configured hugepage in qemu.conf:
hugetlbfs_mount = "/dev/hugepages"

Define a guest with this XML:
<memoryBacking>
    <hugepages/>
  </memoryBacking>
....
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <boot dev='hd'/>
....
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='1024000' unit='KiB'/>
    </numa>
  </cpu>
...
Starting the guest always gets this error:
# virsh start rhel6
error: Failed to start domain rhel6
error: internal error: Unable to find any usable hugetlbfs mount for 0 KiB

Hugepages don't work with the rhel7.2 build.

Hi Michal, are any special steps needed on rhel7.2, or is this a hugepage issue?
Hugepages on a rhel6 host work well.

Comment 4 Michal Privoznik 2015-06-25 16:13:48 UTC
(In reply to zhe peng from comment #3)
> Test with build: libvirt-1.2.16-1.el7.x86_64
> 

> Hi Michal, need some special steps on rhel7.2 ? or it's a hugepage issue?
> hugepage on rhel6 host worked well.

Yes, this is a hugepage issue. Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2015-June/msg01348.html

Comment 5 Michal Privoznik 2015-06-26 07:28:57 UTC
I've just pushed patch upstream:

commit f8e9deb1d4c677eea7f22abef580ceb70765abae
Author:     Michal Privoznik <mprivozn>
AuthorDate: Wed Jun 24 18:09:57 2015 +0200
Commit:     Michal Privoznik <mprivozn>
CommitDate: Fri Jun 26 09:15:26 2015 +0200

    qemuBuildMemoryBackendStr: Fix hugepages lookup process
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1196644
    
    This function constructs the backend (host facing) part of the
    memory device.  At the beginning, the configured hugepages are
    searched to find the best match for given guest NUMA node.
    Configured hugepages can have a @nodeset attribute to specify on
    which guest NUMA nodes should be the hugepages backing used.
    There is, however, one 'corner case'. Users may just tell 'use
    hugepages to back all the nodes'. In other words:
    
      <memoryBacking>
        <hugepages/>
      </memoryBacking>
    
      <cpu>
        <numa>
          <cell id='0' cpus='0-1' memory='1024000' unit='KiB'/>
        </numa>
      </cpu>
    
    Our code fails in this case. Well, since there's no @nodeset (nor
    any <page/> child element to <hugepages/>) we fail to lookup the
    default hugepage size to use.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v1.2.16-316-gf8e9deb
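The corner case this commit fixes can be sketched in a few lines (illustrative Python, not libvirt's C implementation; the function name, data layout, and the 2 MiB default are assumptions for the example):

```python
# Illustrative sketch of the lookup fixed by commit f8e9deb1: find the
# hugepage size that should back a given guest NUMA node.

DEFAULT_HUGEPAGE_SIZE_KIB = 2048  # assumed host default (2 MiB on x86_64)

def lookup_hugepage_size(pages, guest_node):
    """pages: list of (size_kib, nodeset) from <hugepages><page .../> elements;
    nodeset is None when a <page/> applies to all guest NUMA nodes."""
    for size_kib, nodeset in pages:
        if nodeset is None or guest_node in nodeset:
            return size_kib
    # Corner case: a bare <hugepages/> has no <page/> children at all.
    # The buggy code effectively ended up with size 0 here (hence
    # "Unable to find any usable hugetlbfs mount for 0 KiB"); the fix
    # falls back to the default hugepage size instead.
    return DEFAULT_HUGEPAGE_SIZE_KIB

print(lookup_hugepage_size([], 0))                # bare <hugepages/> -> 2048
print(lookup_hugepage_size([(1048576, {1})], 0))  # no match for node 0 -> 2048
```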

Comment 6 zhe peng 2015-07-23 08:21:20 UTC
verify with build:
libvirt-1.2.17-2.el7.x86_64

Steps:
1: Prepare a guest with hugepage settings on the rhel6.6 host
2: # virsh dumpxml rhel6
....
 <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <boot dev='hd'/>
  </os>
....
3: Start the guest and migrate it to the rhel7.2 host
# virsh migrate --live rhel6 qemu+ssh://$target_ip/system --verbose
Migration: [100 %]
4: Check on the target host
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 5     rhel6                          running
#virsh dumpxml rhel6
....
 <memoryBacking>
    <hugepages/>
  </memoryBacking>
....
<os>
    <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
    <boot dev='hd'/>
  </os>
....
Check the qemu command line:
....
qemu      6755     1  2 16:16 ?        00:00:02 /usr/libexec/qemu-kvm -name rhel6 -S -machine rhel6.5.0,accel=kvm,usb=off -m 500 -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu ....

5: Migrate the guest back to the rhel6.6 host
# virsh migrate rhel6 --live qemu+ssh://$source_ip/system --verbose
Migration: [100 %]

Moving to VERIFIED.

Comment 8 errata-xmlrpc 2015-11-19 06:18:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

