Bug 1324523
| Summary: | -mem-prealloc option does not take effect when no huge page is allocated | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Marcel Kolaja <mkolaja> |
| Component: | qemu-kvm-rhev | Assignee: | Luiz Capitulino <lcapitulino> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.3 | CC: | chayang, dyuan, dzheng, hhuang, huding, jherrman, jsuchane, juzhang, knoel, lcapitulino, lhuang, lmiksik, mkolaja, mrezanin, mzhan, sgordon, sherold, snagar, tlavigne, virt-maint, xfu, yafu, yuhuang, zpeng |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-rhev-2.3.0-31.el7_2.12 | Doc Type: | Bug Fix |
| Doc Text: | Prior to this update, when qemu-kvm was used with the -mem-prealloc option to allocate huge pages but the allocation failed, qemu-kvm incorrectly reverted to using regular RAM. Now, qemu-kvm exits in the described situation as expected. | | |
| Story Points: | --- | | |
| Clone Of: | 1296800 | Environment: | |
| Last Closed: | 2016-05-04 17:59:45 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1296800 | | |
| Bug Blocks: | | | |
Description
Marcel Kolaja
2016-04-06 14:18:07 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.12

Reproduce:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.11

Steps:
1. Make sure no huge pages are allocated:
# echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# cat /proc/meminfo | grep -i huge
AnonHugePages:      8192 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
# mount -t hugetlbfs none /mnt/hugetlbfs/
2. Boot the guest with huge pages and -mem-prealloc:
# /usr/libexec/qemu-kvm -name rhel7.2-rt-355 -machine pc-i440fx-rhel7.2.0 -cpu IvyBridge -smp 4,maxcpus=10 \
-drive file=/home/guest/rhel73.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,media=disk -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
-netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a1:d0:5f -monitor stdio -device qxl-vga,id=video0 -vnc :1 \
-m 4096,slots=5,maxmem=10G -mem-prealloc -mem-path /mnt/hugetlbfs

Reproduce:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.11
With the same steps as in comment 5, HMP prints "(qemu) qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory", but the guest still starts and works (qemu-kvm falls back to regular RAM). So the bug is reproduced.

Verify:
kernel-3.10.0-373.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.12
With the same steps as in comment 5, HMP prints "(qemu) qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory", and qemu-kvm exits. So the bug is fixed.

Test packages:
qemu-kvm-rhev-2.3.0-31.el7_2.12.x86_64
libvirt-1.3.3-1.el7.x86_64
kernel-3.10.0-327.el7.x86_64
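For a quicker spot check than the full command line in the reproduction steps, a stripped-down qemu-kvm invocation shows the same difference between the affected and fixed builds (a minimal sketch; the 1024 MiB size and -display none are illustrative, the sysfs path and /mnt/hugetlbfs mount point come from the reproduction steps):
# echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# mount -t hugetlbfs none /mnt/hugetlbfs
# /usr/libexec/qemu-kvm -m 1024 -mem-prealloc -mem-path /mnt/hugetlbfs -display none
# echo $?
On qemu-kvm-rhev-2.3.0-31.el7_2.12 the qemu-kvm command is expected to print the "unable to map backing store for hugepages" error and exit with a non-zero status; on 2.3.0-31.el7_2.11 it prints the same error but keeps running on regular RAM and has to be terminated by hand.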
************************************
Case1:
1. No huge pages are allocated:
# cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
0
# ll /dev/hugepages/libvirt/qemu
total 0
2. The guest XML uses the following:
<memoryBacking>
<hugepages/>
</memoryBacking>
3. Start the guest; it fails as expected.
# virsh start d1
error: Failed to start domain d1
error: internal error: process exited while connecting to monitor: 2016-04-14T07:35:05.040519Z qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory
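To confirm that libvirt actually translated <hugepages/> into the qemu options this bug is about, the generated command line can be inspected in the domain log (a sketch; /var/log/libvirt/qemu is the default libvirt log directory, and d1 is the domain name used above):
# grep -o -e '-mem-prealloc' -e '-mem-path [^ ]*' /var/log/libvirt/qemu/d1.log
For a simple non-NUMA guest like this one, both -mem-prealloc and -mem-path should appear, matching the manual qemu-kvm invocation from the reproduction steps; for NUMA guests libvirt may instead pass an equivalent memory-backend-file object with prealloc=yes.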
************************************
Case2:
1. Same as case 1, but use the following XML:
<memoryBacking>
<hugepages>
<page size='2' unit='MiB' nodeset='0'/>
</hugepages>
</memoryBacking>
2. Start the guest; it fails as expected.
# virsh start d1
error: Failed to start domain d1
error: internal error: process exited while connecting to monitor: 2016-04-14T07:35:05.040519Z qemu-kvm: unable to map backing store for hugepages: Cannot allocate memory
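For this per-node variant to succeed, 2 MiB pages would have to be reserved on NUMA node 0 first; one way to do that on a NUMA host is the per-node sysfs knob (a sketch; the count of 512 pages is illustrative):
# echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
The second command reports how many pages were actually reserved, which can be lower than requested if node 0's memory is fragmented.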
************************************
Case3:
1. Disable huge pages in the libvirt configuration:
# vim /etc/libvirt/qemu.conf
...
hugetlbfs_mount = ""
2. Guest XML:
<memoryBacking>
<hugepages/>
</memoryBacking>
3. # virsh start d1
error: Failed to start domain d1
error: internal error: hugetlbfs filesystem is not mounted or disabled by administrator config
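Note that libvirtd reads qemu.conf only at daemon startup, so after changing hugetlbfs_mount the daemon has to be restarted for the new value to take effect (the same step appears in case 4 below):
# systemctl restart libvirtd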
************************************
Case4:
1. # cat /etc/libvirt/qemu.conf
hugetlbfs_mount = "/dev/hugepages"
2. # mount -t hugetlbfs hugetlbfs /dev/hugepages
3. Reserve memory for huge pages, e.g.:
# sysctl vm.nr_hugepages=600
4. Restart libvirtd service
# systemctl restart libvirtd
# systemctl restart virtlogd.socket
5. Check huge page usage before starting the guest (note that only 567 of the 600 requested pages were actually reserved):
# more /proc/meminfo |grep Huge
AnonHugePages: 829440 kB
HugePages_Total: 567
HugePages_Free: 567
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
6. Start the guest:
# virsh start d1
Domain d1 started
# more /proc/meminfo |grep Huge
AnonHugePages: 843776 kB
HugePages_Total: 567
HugePages_Free: 55
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
That is, the guest is using huge pages:
(567 - 55) * 2 MiB = 1024 MiB, which matches <memory unit='KiB'>1048576</memory>
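As a quick sanity check of that arithmetic (plain shell arithmetic only, nothing here queries the guest), the number of 2048 KiB pages needed for the 1048576 KiB guest equals the drop in HugePages_Free:
# echo $(( 1048576 / 2048 ))
512
# echo $(( 567 - 55 ))
512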
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0719.html