Bug 1175449 - Huge pages: "libvirtError: internal error: Unable to find any usable hugetlbfs mount for 4 KiB"
Summary: Huge pages: "libvirtError: internal error: Unable to find any usable hugetlbfs mount for 4 KiB"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1173507
Blocks:
 
Reported: 2014-12-17 18:41 UTC by Stephen Gordon
Modified: 2015-11-19 06:04 UTC
CC: 19 users

Fixed In Version: libvirt-1.2.13-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1173507
Environment:
Last Closed: 2015-11-19 06:04:59 UTC
Target Upstream Version:
Embargoed:




Links
  Red Hat Product Errata RHBA-2015:2202 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update, last updated 2015-11-19 08:17:58 UTC

Description Stephen Gordon 2014-12-17 18:41:40 UTC
+++ This bug was initially created as a clone of Bug #1173507 +++

Description of the problem
--------------------------

Booting a guest with a Nova flavor whose huge pages size is set to 'any' (support for this is work in progress upstream[*]) fails with:

    libvirtError: internal error: Unable to find any usable hugetlbfs mount for 4 KiB
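
For reference, the guest XML that Nova generates in this case requests 4 KiB "huge" pages; the libvirt commit message quoted later in this bug shows the exact shape:

    <memoryBacking>
      <hugepages>
        <page size='4' unit='KiB'/>
      </hugepages>
    </memoryBacking>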



 
[*] http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-large-pages.html#proposed-change


Version
-------

Apply the virt-driver-large-pages patch series to Nova git, and test via
DevStack:

    https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-large-pages,n,z

    $ git log | grep "commit\ " | head -8
    commit c0c5d6a497c0e275e6f2037c1f7d45983a077cbc
    commit 9d1d59bd82a7f2747487884d5880270bfdc9734a
    commit eda126cce41fd5061b630a1beafbf5c37292946e
    commit 6980502683bdcf514b386038ca0e0ef8226c27ca
    commit b1ddc34efdba271f406a6db39c8deeeeadcb8cc9
        This commit also add a new exceptions MemoryPageSizeInvalid and
    commit 2fcfc675aa04ef2760f0e763697c73b6d90a4fca
    commit 567987035bc3ef685ea09ac2b82be55aa5e23ca5

    $ git describe
    2014.2-1358-gc0c5d6a


libvirt version: libvirt-1.2.11 (built from libvirt git)

    $ git log | head -1 
    commit a2a35d0164f4244b9c6f143f54e9bb9f3c9af7d3a
    $ git describe
    CVE-2014-7823-247-ga2a35d0



Steps to Reproduce
------------------

Test environment: I was testing Nova huge pages in a DevStack VM with KVM
nested virtualization, i.e. the Nova instances are nested guests.

Check that 'hugetlbfs' is listed in /proc/filesystems:

    $ cat /proc/filesystems  | grep hugetlbfs
    nodev   hugetlbfs
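
If no hugetlbfs mount exists, the standard way to create one is the following mount command; this step was not needed in the original setup and the mount point shown is the conventional one:

    # mount -t hugetlbfs hugetlbfs /dev/hugepages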

Get the number of total huge pages:

    $ grep HugePages_Total /proc/meminfo
    HugePages_Total:     512

Get the number of free huge pages:

    $ grep HugePages_Free /proc/meminfo
    HugePages_Free:      512
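
For reference, the 512-page pool shown above can be sized at runtime with sysctl; this is standard procedure and not a step from the original report:

    # sysctl vm.nr_hugepages=512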

Create flavor:

    nova flavor-create m1.hugepages 999 2048 1 4

Set extra_spec values for NUMA and Huge pages, with value as 'any':

    nova flavor-key m1.hugepages set hw:numa_nodes=1
    nova flavor-key m1.hugepages set hw:mem_page_size=any
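
Per the referenced spec, 'any' is one of several accepted values for hw:mem_page_size; the following alternatives are illustrative and not from the original report:

    nova flavor-key m1.hugepages set hw:mem_page_size=large
    nova flavor-key m1.hugepages set hw:mem_page_size=2048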

Enumerate the newly created flavor properties:

    $ nova flavor-show m1.hugepages
    +----------------------------+-----------------------------------------------------+
    | Property                   | Value                                               |
    +----------------------------+-----------------------------------------------------+
    | OS-FLV-DISABLED:disabled   | False                                               |
    | OS-FLV-EXT-DATA:ephemeral  | 0                                                   |
    | disk                       | 1                                                   |
    | extra_specs                | {"hw:mem_page_size": "any", "hw:numa_nodes": "1"}   |
    | id                         | 999                                                 |
    | name                       | m1.hugepages                                        |
    | os-flavor-access:is_public | True                                                |
    | ram                        | 2048                                                |
    | rxtx_factor                | 1.0                                                 |
    | swap                       |                                                     |
    | vcpus                      | 4                                                   |
    +----------------------------+-----------------------------------------------------+


Boot a guest with the above flavor.
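
The exact boot command was not captured in the report; a representative invocation (image and instance names here are placeholders) would be:

    $ nova boot --flavor m1.hugepages --image <image-name> hugepages-test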


Actual results
--------------


(1) Contextual error messages from Nova Compute log (screen-n-cpu.log):

. . .
2014-12-11 13:06:34.141 ERROR nova.compute.manager [-] [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] Instance failed to spawn
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] Traceback (most recent call last):
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2282, in _build_resources
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     yield resources
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2152, in _build_and_run_instance
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     flavor=flavor)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/virt/libvirt/driver.py", line 2384, in spawn
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     block_device_info=block_device_info)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/virt/libvirt/driver.py", line 4278, in _create_domain_and_network
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     power_on=power_on)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/virt/libvirt/driver.py", line 4211, in _create_domain
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     LOG.error(err)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib/python2.7/site-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     six.reraise(self.type_, self.value, self.tb)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/home/kashyapc/src/cloud/nova/nova/virt/libvirt/driver.py", line 4201, in _create_domain
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     domain.createWithFlags(launch_flags)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     rv = execute(f, *args, **kwargs)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     six.reraise(c, e, tb)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     rv = meth(*args, **kwargs)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1033, in createWithFlags
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2014-12-11 13:06:34.141 TRACE nova.compute.manager [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] libvirtError: internal error: Unable to find any usable hugetlbfs mount for 4 KiB
. . .


Expected results
----------------

As specified in the spec[*]: if huge pages are present, the libvirt driver should find and use them; if not, it should fall back to small pages.

Additional info
---------------

(2) Contextual error messages from Nova Conductor log (screen-n-cond.log):
----------------------------------------
. . .
2014-12-11 13:06:34.738 ERROR nova.scheduler.utils [req-7812c740-ec60-461e-a6b7-66b4bd4359ee admin admin] [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] Error from last host: fedvm1 (node fedvm1): [u'Traceback (most recent call last):\n', u'  File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2060, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2200, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance c8e1093b-81d6-4bc8-a319-7a8ea384c9fb was re-scheduled: internal error: Unable to find any usable hugetlbfs mount for 4 KiB\n']
. . .
----------------------------------------


(3) This error message comes from libvirt and was added in this commit:

    http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=281f70013e

--- Additional comment from Kashyap Chamarthy on 2014-12-12 05:30:41 EST ---

(In reply to Kashyap Chamarthy from comment #0)

To restate the expected results: if huge pages are present, the libvirt driver should find and use them; if not, it should fall back to small pages. The spec referred to is the OpenStack Nova virt-driver-large-pages spec[1].


  [1] http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-large-pages.html#proposed-change



Additional info
---------------

Posting an email comment from Michal Privoznik:

  This is a libvirt bug. What my patches allowed is specifying the huge
  page size to be used either for the whole domain or per domain NUMA
  node. When writing the patches I agreed with Dan that it would be much
  easier for OpenStack if libvirt treated regular system pages and huge
  pages the same. Obviously, it isn't doing that. Although one may argue
  that the <hugepages/> element should accept huge pages only, it will
  make life easier for you if it accepts system pages too. Patch on its
  way. @Kashyap, can you please open a bug?

--- Additional comment from Kashyap Chamarthy on 2014-12-13 04:55:33 EST ---

Upstream patch by Michal Privoznik:

   http://www.redhat.com/archives/libvir-list/2014-December/msg00669.html

--- Additional comment from Michal Privoznik on 2014-12-15 10:47:23 EST ---

I've pushed patch upstream:

commit 311b4a677f60cc1a3a29c525a703b31ec47d95b5
Author:     Michal Privoznik <mprivozn>
AuthorDate: Fri Dec 12 10:37:35 2014 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Mon Dec 15 13:36:47 2014 +0100

    qemu: Allow system pages to <memoryBacking/>
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1173507
    
    It occurred to me that OpenStack uses the following XML when not using
    regular huge pages:
    
      <memoryBacking>
        <hugepages>
          <page size='4' unit='KiB'/>
        </hugepages>
      </memoryBacking>
    
    However, since we are expecting to see huge pages only, we fail to
    startup the domain with following error:
    
      libvirtError: internal error: Unable to find any usable hugetlbfs
      mount for 4 KiB
    
    While regular system pages are not huge pages technically, our code is
    prepared for that and if it helps OpenStack (or other management
    applications) we should cope with that.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v1.2.11-12-g311b4a6

Comment 2 Jiri Denemark 2014-12-18 09:02:10 UTC
Btw, looking at the current version (47) of the OpenStack patches at https://review.openstack.org/#/c/128703/47/nova/virt/libvirt/driver.py, I don't see any need to backport the libvirt commit anywhere. In the current version OpenStack will not try to use normal pages in /domain/memoryBacking/hugepages, and it needs to keep that code until the minimum required libvirt version is 1.2.12. In other words, I suggest closing this as WONTFIX.

Comment 6 Luyao Huang 2015-05-29 06:49:58 UTC
I can reproduce this issue with libvirt-1.2.8-16.el7.x86_64:

# virsh dumpxml test3
...
  <memoryBacking>
    <hugepages>
      <page size='4' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
...

# virsh start test3
error: Failed to start domain test3
error: internal error: Unable to find any usable hugetlbfs mount for 4 KiB
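
For contrast (an editorial assumption, not part of this comment): the same libvirt version is expected to start the domain when a real huge page size is requested, e.g. with 2 MiB pages allocated on the host:

  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>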

And verified with libvirt-1.2.15-2.el7.x86_64:
1.
# virsh dumpxml test3
...
  <memoryBacking>
    <hugepages>
      <page size='4' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
...

2.
# virsh start test3
Domain test3 started

3. No hugepage-related options appear on the qemu command line:

# ps aux|grep qemu
...
-m 1024 -realtime mlock=off -smp 2,maxcpus=4,sockets=4,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024
...
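
An additional sanity check, not part of the original verification: since the domain now uses regular pages, the host's free huge page count should be unchanged after the guest starts:

# grep HugePages_Free /proc/meminfo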

Comment 8 errata-xmlrpc 2015-11-19 06:04:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

