Bug 1279387 - hugepage could not be used inside guest if start the guest with NUMA supported huge pages [7.2.z]
Summary: hugepage could not be used inside guest if start the guest with NUMA supported huge pages [7.2.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: ppc64le
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Gibson
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1265576
Blocks: RHEV3.6PPC
 
Reported: 2015-11-09 11:08 UTC by Jan Kurik
Modified: 2016-07-27 20:30 UTC
CC List: 15 users

Fixed In Version: qemu-kvm-rhev-2.3.0-31.el7_2.2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1265576
Environment:
Last Closed: 2015-12-07 21:42:43 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHBA-2015:2555 (normal, SHIPPED_LIVE): qemu-kvm-rhev bug fix update, last updated 2015-12-08 02:42:08 UTC

Description Jan Kurik 2015-11-09 11:08:35 UTC
This bug has been copied from bug #1265576 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 4 Xujun Ma 2015-11-17 03:36:52 UTC
Verified the issue on the scratch build:
Version-Release number of selected component (if applicable):
Guest kernel: 3.10.0-327.el7.ppc64le
Host kernel:  3.10.0-327.el7.ppc64le
qemu-kvm-rhev-2.3.0-31.el7.next.candidate.ppc64le


Steps to Reproduce:
1. Mount hugetlbfs, allocate huge pages, and check them on the host (a per-node variant of this allocation is sketched at the end of this comment):
# mount -t hugetlbfs hugetlbfs /dev/hugepages -o pagesize=16M
# echo 256 > /proc/sys/vm/nr_hugepages 
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     256
HugePages_Free:      256
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

2. Start a guest with the following command:
/usr/libexec/qemu-kvm \
 -m 4096M -smp 4 -monitor stdio -qmp tcp::8889,server,nowait -vnc :26 \
 -boot menu=on \
 -rtc base=utc,clock=vm \
 -netdev tap,id=tap0,script=/etc/qemu-ifup \
 -device virtio-net-pci,netdev=tap0,bootindex=3,id=net0,mac=24:be:05:11:92:11 \
 -drive file=sys1.qcow2,if=none,id=drive-0-0-0,format=qcow2,cache=none \
 -device virtio-blk-pci,drive=drive-0-0-0,bootindex=0,id=scsi0-0-0-0 \
 -device virtio-scsi-pci \
 -device scsi-cd,id=scsi-cd1,drive=scsi-cd1-dr,bootindex=1 \
 -drive file=RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso,if=none,id=scsi-cd1-dr,readonly=on,format=raw,cache=none \
 -object memory-backend-file,host-nodes=0,policy=interleave,id=mem-0,size=2048M,prealloc=yes,mem-path=/dev/hugepages \
 -numa node,memdev=mem-0,nodeid=1 \
 -object memory-backend-file,host-nodes=1,policy=interleave,id=mem-1,size=2048M,prealloc=yes,mem-path=/dev/hugepages \
 -numa node,memdev=mem-1,nodeid=0

3. Check huge pages on the host:
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     256
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

4. Check memory in the guest:
#mount -t hugetlbfs hugetlbfs /mnt
#echo 256 > /proc/sys/vm/nr_hugepages
#cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     179
HugePages_Free:      179
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

# numactl  -H
available: 2 nodes (0-1)
node 0 cpus: 0 2
node 0 size: 2048 MB
node 0 free: 1239 MB
node 1 cpus: 1 3
node 1 size: 2048 MB
node 1 free: 1795 MB
node distances:
node   0   1 
  0:  10  40 
  1:  40  10 


Results: Huge pages can be used in the guest when the guest is started with NUMA nodes backed by huge pages, so the bug has been fixed.
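
As a side note on step 1 above: since the memory backends bind to specific host nodes via host-nodes=, it can be worth confirming how the allocated huge pages are distributed across the host's NUMA nodes, not just the global total. A minimal sketch using the kernel's per-node sysfs interface; the hugepages-16384kB directory name assumes the 16M default huge page size used in this test, and node0/node1 assume a two-node host:

# echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-16384kB/nr_hugepages
# echo 128 > /sys/devices/system/node/node1/hugepages/hugepages-16384kB/nr_hugepages
# grep . /sys/devices/system/node/node*/hugepages/hugepages-16384kB/nr_hugepages

The per-node counts should sum to the HugePages_Total reported in /proc/meminfo.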

Comment 5 Miroslav Rezanina 2015-11-18 10:07:44 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.2

Comment 6 Xujun Ma 2015-11-20 03:24:26 UTC
Reproduced the issue on the old version:

Version-Release number of selected component (if applicable):
Guest kernel: 3.10.0-327.el7.ppc64le
qemu-kvm-rhev: qemu-kvm-rhev-2.3.0-23.el7.ppc64le
Host kernel: 3.10.0-316.el7.ppc64le


Steps to Reproduce:
1. Mount hugetlbfs, allocate huge pages, and check them on the host:
# mount -t hugetlbfs hugetlbfs /dev/hugepages -o pagesize=16M
# echo 128 > /proc/sys/vm/nr_hugepages 
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     128
HugePages_Free:      128
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

2. Start a guest with the following command:
/usr/libexec/qemu-kvm \
 -m 2048M -smp 4 -monitor stdio -qmp tcp::8889,server,nowait -vnc :26 \
 -boot menu=on \
 -rtc base=utc,clock=vm \
 -netdev tap,id=tap0,script=/etc/qemu-ifup \
 -device virtio-net-pci,netdev=tap0,bootindex=3,id=net0,mac=24:be:05:11:92:11 \
 -drive file=sys1.qcow2,if=none,id=drive-0-0-0,format=qcow2,cache=none \
 -device virtio-blk-pci,drive=drive-0-0-0,bootindex=0,id=scsi0-0-0-0 \
 -device virtio-scsi-pci \
 -device scsi-cd,id=scsi-cd1,drive=scsi-cd1-dr,bootindex=1 \
 -drive file=RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso,if=none,id=scsi-cd1-dr,readonly=on,format=raw,cache=none \
 -object memory-backend-file,host-nodes=0,policy=interleave,id=mem-0,size=2048M,prealloc=yes,mem-path=/dev/hugepages \
 -numa node,memdev=mem-0,nodeid=0


3. Check huge pages on the host:
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     128
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

4. Check memory in the guest:
#mount -t hugetlbfs hugetlbfs /mnt
mount: unknown filesystem type 'hugetlbfs'

#mount -t hugetlbfs hugetlbfs /dev/hugepages -o pagesize=16M
mount: mount point /dev/hugepages does not exist

#echo 256 > /proc/sys/vm/nr_hugepages
-bash: echo: write error: Success

#cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB

# numactl  -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 2048 MB
node 0 free: 1092 MB
node distances:
node   0 
  0:  10


Results: Huge pages cannot be used in the guest when the guest is started with NUMA nodes backed by huge pages.
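
For reference, the failure above can be narrowed down from inside the guest: when the guest kernel exposes no huge page support, hugetlbfs is absent from /proc/filesystems, which matches the "unknown filesystem type 'hugetlbfs'" error in step 4. A quick diagnostic sketch (not part of the original test plan):

# grep hugetlbfs /proc/filesystems
# grep -i huge /proc/meminfo

On a working guest the first command prints "nodev hugetlbfs" and the second lists the HugePages_* counters; on the broken build the first returns nothing and meminfo shows only AnonHugePages.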


Verified the issue on the latest build:
Version-Release number of selected component (if applicable):
Guest kernel: 3.10.0-327.el7.ppc64le
qemu-kvm-rhev: qemu-kvm-rhev-2.3.0-31.el7_2.2.ppc64le
Host kernel: 3.10.0-327.el7.ppc64le


Steps to Reproduce:
1. Mount hugetlbfs, allocate huge pages, and check them on the host:
# mount -t hugetlbfs hugetlbfs /dev/hugepages -o pagesize=16M
# echo 128 > /proc/sys/vm/nr_hugepages 
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     128
HugePages_Free:      128
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

2. Start a guest with the following command:
/usr/libexec/qemu-kvm \
 -m 2048M -smp 4 -monitor stdio -qmp tcp::8889,server,nowait -vnc :26 \
 -boot menu=on \
 -rtc base=utc,clock=vm \
 -netdev tap,id=tap0,script=/etc/qemu-ifup \
 -device virtio-net-pci,netdev=tap0,bootindex=3,id=net0,mac=24:be:05:11:92:11 \
 -drive file=sys1.qcow2,if=none,id=drive-0-0-0,format=qcow2,cache=none \
 -device virtio-blk-pci,drive=drive-0-0-0,bootindex=0,id=scsi0-0-0-0 \
 -device virtio-scsi-pci \
 -device scsi-cd,id=scsi-cd1,drive=scsi-cd1-dr,bootindex=1 \
 -drive file=RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso,if=none,id=scsi-cd1-dr,readonly=on,format=raw,cache=none \
 -object memory-backend-file,host-nodes=0,policy=interleave,id=mem-0,size=2048M,prealloc=yes,mem-path=/dev/hugepages \
 -numa node,memdev=mem-0,nodeid=0


3. Check huge pages on the host:
# cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:     128
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

4. Check memory in the guest:
#mount -t hugetlbfs hugetlbfs /mnt

#mount -t hugetlbfs hugetlbfs /dev/hugepages -o pagesize=16M

#echo 128 > /proc/sys/vm/nr_hugepages

#cat /proc/meminfo |grep -i HugePages
AnonHugePages:         0 kB
HugePages_Total:      64
HugePages_Free:       64
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      16384 kB

# numactl  -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 2048 MB
node 0 free: 176 MB
node distances:
node   0 
  0:  10


Results: Huge pages can be used in the guest when the guest is started with NUMA nodes backed by huge pages, so the bug has been fixed in qemu-kvm-rhev-2.3.0-31.el7_2.2.
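
As an additional cross-check (not part of the original verification), the host-node placement of the guest's RAM can be confirmed from the host side with numastat from the numactl package; the pidof call assumes a single running qemu-kvm process:

# numastat -p $(pidof qemu-kvm)

This breaks the QEMU process's resident memory down per host NUMA node, so the 2048M backend defined with host-nodes=0 should show up under node 0.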

Comment 10 errata-xmlrpc 2015-12-07 21:42:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2555.html

