Bug 1324523 - -mem-prealloc option does not take effect when no huge page is allocated
Summary: -mem-prealloc option does not take effect when no huge page is allocated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Luiz Capitulino
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1296800
Blocks:
 
Reported: 2016-04-06 14:18 UTC by Marcel Kolaja
Modified: 2016-05-04 17:59 UTC
CC: 24 users

Fixed In Version: qemu-kvm-rhev-2.3.0-31.el7_2.12
Doc Type: Bug Fix
Doc Text:
Prior to this update, when the qemu-kvm service was used with the -mem-prealloc option to allocate huge pages but the operation failed, qemu-kvm incorrectly reverted to regular RAM usage. Now, qemu-kvm exits in the described situation as expected.
Clone Of: 1296800
Environment:
Last Closed: 2016-05-04 17:59:45 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0719 0 normal SHIPPED_LIVE qemu-kvm-rhev bug fix update 2016-05-04 21:59:22 UTC

Description Marcel Kolaja 2016-04-06 14:18:07 UTC
This bug has been copied from bug #1296800 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 3 Miroslav Rezanina 2016-04-14 04:57:16 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.12

Comment 5 Yumei Huang 2016-04-14 07:14:24 UTC
Reproduce:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.11

Steps:
1. Free all huge pages and mount hugetlbfs:
# echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# cat /proc/meminfo | grep -i huge
AnonHugePages:      8192 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# mount -t hugetlbfs none /mnt/hugetlbfs/

2. Boot the guest with huge pages and -mem-prealloc:
# /usr/libexec/qemu-kvm -name rhel7.2-rt-355 -machine pc-i440fx-rhel7.2.0 -cpu IvyBridge -smp 4,maxcpus=10 \
-drive file=/home/guest/rhel73.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,media=disk -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
-netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a1:d0:5f -monitor stdio -device qxl-vga,id=video0 -vnc :1 \
-m 4096,slots=5,maxmem=10G -mem-prealloc -mem-path /mnt/hugetlbfs
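The buggy behavior (falling back to regular RAM instead of exiting) can also be guarded against from a launch script by checking free huge pages before starting qemu. A minimal sketch; the function name `require_hugepages` and its arguments are illustrative, not part of qemu or this bug:

```shell
# Sketch: refuse to launch when too few huge pages are free, mirroring
# the fixed qemu-kvm behavior (exit instead of silently using RAM).
# require_hugepages NEEDED AVAILABLE -> returns 0 if enough, 1 otherwise.
require_hugepages() {
    needed=$1
    available=$2
    if [ "$available" -lt "$needed" ]; then
        echo "error: need $needed huge pages, only $available free" >&2
        return 1
    fi
    return 0
}

# In a real script AVAILABLE would come from the kernel, e.g.:
#   free=$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)
#   require_hugepages 2048 "$free" || exit 1
```

For a 4096 MiB guest with 2048 kB pages, 2048 free pages would be needed; with nr_hugepages set to 0 as in step 1, the check fails and the script can refuse to start qemu at all.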

Comment 6 Yumei Huang 2016-04-14 07:20:51 UTC
Reproduce:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.11

With the steps from comment 5, HMP prints "(qemu) qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory", but the guest still boots and works.
So the bug is reproduced.

Verify:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.12

With the same steps as comment 5, HMP prints "(qemu) qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory", and qemu exits.
So the bug is fixed.

Comment 7 Dan Zheng 2016-04-14 08:44:00 UTC
Test packages:
qemu-kvm-rhev-2.3.0-31.el7_2.12.x86_64
libvirt-1.3.3-1.el7.x86_64
kernel-3.10.0-327.el7.x86_64

************************************
Case1:
1. There is no huge page allocated.
# cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
0
# ll /dev/hugepages/libvirt/qemu 
total 0

2. The guest XML uses:

  <memoryBacking>
    <hugepages/>
  </memoryBacking>

3. Start the guest; it fails as expected.
# virsh start d1
error: Failed to start domain d1
error: internal error: process exited while connecting to monitor: 2016-04-14T07:35:05.040519Z qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory

************************************
Case2:
1. Same as case 1, but with the following XML:
  <memoryBacking>
    <hugepages>
      <page size='2' unit='MiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>

2. Start the guest; it fails as expected.
# virsh start d1
error: Failed to start domain d1
error: internal error: process exited while connecting to monitor: 2016-04-14T07:35:05.040519Z qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory


************************************
Case3:

1. Disable huge pages in the libvirt config:
# vim /etc/libvirt/qemu.conf
...
hugetlbfs_mount = ""

2. Guest XML:
  <memoryBacking>
    <hugepages/>
  </memoryBacking>

3. # virsh start d1
error: Failed to start domain d1
error: internal error: hugetlbfs filesystem is not mounted or disabled by administrator config

************************************
Case4:
1. # cat /etc/libvirt/qemu.conf
hugetlbfs_mount = "/dev/hugepages"

2. # mount -t hugetlbfs hugetlbfs /dev/hugepages

3. Reserve memory for huge pages, e.g.:
# sysctl vm.nr_hugepages=600

4. Restart libvirtd service
# systemctl restart libvirtd
# systemctl restart virtlogd.socket

5. Check /proc/meminfo before the guest starts:
# more /proc/meminfo |grep Huge
AnonHugePages:    829440 kB
HugePages_Total:     567
HugePages_Free:      567
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

6. Start the guest:
Domain d1 started
# more /proc/meminfo |grep Huge
AnonHugePages:    843776 kB
HugePages_Total:     567
HugePages_Free:       55
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

That is, the guest is using huge pages:
(567 - 55) * 2 MiB = 1024 MiB, which matches <memory unit='KiB'>1048576</memory>
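The accounting in comment 7 can be checked mechanically: guest memory backed by huge pages is (HugePages_Total - HugePages_Free) * Hugepagesize. A small sketch using the figures from the comment (the values are hard-coded here; on a live host they would be read from /proc/meminfo):

```shell
# Verify huge-page accounting from the /proc/meminfo figures in comment 7:
# guest memory = (HugePages_Total - HugePages_Free) * Hugepagesize.
total=567        # HugePages_Total after guest start
free_after=55    # HugePages_Free after guest start
pagesize_kb=2048 # Hugepagesize in kB

used_kb=$(( (total - free_after) * pagesize_kb ))
echo "$used_kb"  # should match <memory unit='KiB'>1048576</memory>
```

On a live system the three inputs could be pulled with awk, e.g. `awk '/HugePages_Total/ {print $2}' /proc/meminfo`.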

Comment 10 errata-xmlrpc 2016-05-04 17:59:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0719.html

