Bug 1351409 - When hotplugging memory, the guest shuts down with "Insufficient free host memory pages available to allocate"
Summary: When hotplugging memory, the guest shuts down with "Insufficient free host memory pages available to allocate"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Igor Mammedov
QA Contact: Yumei Huang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-30 03:41 UTC by yalzhang@redhat.com
Modified: 2016-11-07 21:20 UTC
CC: 10 users

Fixed In Version: qemu-kvm-rhev-2.6.0-21.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-07 21:20:35 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:2673 (normal, SHIPPED_LIVE): qemu-kvm-rhev bug fix and enhancement update. Last Updated: 2016-11-08 01:06:13 UTC

Description yalzhang@redhat.com 2016-06-30 03:41:25 UTC
Description of problem:
The guest shuts down when memory is hotplugged, with the error "Insufficient free host memory pages available to allocate guest RAM".

Version-Release number of selected component (if applicable):
libvirt-1.3.5-1.el7.x86_64
qemu-kvm-rhev-2.6.0-9.el7.x86_64

How reproducible:
90%

Steps to Reproduce:
1. The host has four hugepages of size 1 GiB each:
# cat /proc/meminfo | grep Huge
AnonHugePages:    186368 kB
HugePages_Total:       4
HugePages_Free:        4
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
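
For reference, a minimal sketch (not part of the original report) of how such a 1 GiB hugepage pool is typically reserved. On many kernels, gigantic pages must be reserved at boot via the kernel command line (default_hugepagesz=1G hugepagesz=1G hugepages=4); on kernels that support runtime allocation of gigantic pages, a per-NUMA-node reservation can be attempted through sysfs:

# echo 1 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# echo 1 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages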

Prepare a guest with maxMemory defined and hugepage backing:
# virsh dumpxml r7.1
......
 <maxMemory slots='16' unit='KiB'>25600000</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
......
  <cpu>
.....
    <numa>
      <cell id='0' cpus='0-7' memory='1048576' unit='KiB'/>
      <cell id='1' cpus='8-15' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
......
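
For reference (not part of the original report), libvirt can also report the per-node free hugepage pools directly, which is useful before attaching backends bound to a specific node:

# virsh freepages --all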
2. Attach memory devices with a source defined and pagesize=1G:
# cat dev1g.xml
<memory model='dimm'>
  <source>
    <pagesize unit='KiB'>1048576</pagesize>
    <nodemask>1</nodemask>
  </source>
  <target>
    <size unit='MiB'>1024</size>
    <node>0</node>
  </target>
</memory>

# virsh start r7.1
Domain r7.1 started

# virsh attach-device r7.1 dev1g.xml
Device attached successfully

# virsh attach-device r7.1 dev1g.xml
Device attached successfully

# virsh attach-device r7.1 dev1g.xml
error: Failed to attach device from dev1g.xml
error: Unable to read from monitor: Connection reset by peer

# virsh domstate r7.1
shut off
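
For reference (not part of the original report), the remaining per-node 1 GiB pool and the DIMMs already hotplugged can be inspected between attaches; the sysfs path and QMP command below are standard, with the domain name taken from the steps above:

# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
# virsh qemu-monitor-command r7.1 --pretty '{"execute":"query-memory-devices"}'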

3. Check the guest log:
# cat /var/log/libvirt/qemu/r7.1.log
......
os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM
2016-06-30 02:42:53.682+0000: shutting down

Actual results:
The guest shuts down when there are insufficient free host memory pages available to allocate guest RAM.

Expected results:
The guest should not shut down; the virsh command should report an error, such as "Insufficient free host memory pages available to allocate guest RAM".

Additional info:

Comment 2 Igor Mammedov 2016-07-01 08:06:05 UTC
Please provide the command line used to start QEMU and the QMP command used to hotplug memory.

Comment 3 yalzhang@redhat.com 2016-07-01 11:18:51 UTC
# ps -aux | grep r7.1
qemu     24980 49.8  0.1 2563696 165632 ?      Sl   18:56   0:16 /usr/libexec/qemu-kvm -name guest=r7.1,debug-threads=on -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-42-r7.1/master-key.aes \
  -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off \
  -m size=2097152k,slots=16,maxmem=25600000k \
  -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 \
  -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages1G/libvirt/qemu,size=1073741824 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \
  -object memory-backend-ram,id=ram-node1,size=1073741824 \
  -numa node,nodeid=1,cpus=2-3,memdev=ram-node1 \
  -uuid b87330b8-a5dc-4fd1-b307-b504debdcca7 -nographic -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-42-r7.1/monitor.sock,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/var/lib/libvirt/images/r7.1.img,format=qcow2,if=none,id=drive-virtio-disk0 \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 -msg timestamp=on

# cat /proc/meminfo | grep Huge
AnonHugePages:   1175552 kB
HugePages_Total:       4
HugePages_Free:        3
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

#  virsh  qemu-monitor-command r7.1 --pretty  '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm0","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
{
  "return": {

  },
  "id": "libvirt-13"
}


#  virsh  qemu-monitor-command r7.1 --pretty  '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm1","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
{
  "return": {

  },
  "id": "libvirt-14"
}

#  virsh  qemu-monitor-command r7.1 --pretty  '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm2","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
error: Unable to read from monitor: Connection reset by peer

# virsh domstate r7.1
shut off
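
For completeness (not part of the original transcript): after a successful object-add, libvirt would normally issue a matching device_add to plug the backend into a pc-dimm; over QMP this would look roughly like the following (a sketch, with ids matching the objects above):

# virsh qemu-monitor-command r7.1 --pretty '{"execute":"device_add","arguments":{"driver":"pc-dimm","id":"dimm0","memdev":"memdimm0","node":0}}'

Here the crash happens during object-add itself, before any device_add is sent.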

Comment 4 Igor Mammedov 2016-07-15 14:35:12 UTC
I can't reproduce it locally; could you provide access to a system where the bug reproduces?

Comment 5 yalzhang@redhat.com 2016-07-18 02:29:39 UTC
Hi Juzhang and Igor,

I have just sent a mail providing the access info for the system, and r7.1 is ready to use to reproduce the issue. Please have a look, thank you very much!

Comment 6 Igor Mammedov 2016-08-09 13:33:32 UTC
Fixed upstream (2.7):

056b68af fix qemu exit on memory hotplug when allocation fails at prealloc time
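
With this fix, a preallocation failure during object-add is propagated back through the monitor instead of terminating QEMU, so the expected libvirt-level behavior is roughly the following (a sketch; the exact error wording may differ):

# virsh attach-device r7.1 dev1g.xml
error: Failed to attach device from dev1g.xml
error: internal error: unable to execute QEMU command 'object-add': Insufficient free host memory pages available to allocate guest RAM

# virsh domstate r7.1
running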

Comment 7 Miroslav Rezanina 2016-08-16 11:22:50 UTC
Fix included in qemu-kvm-rhev-2.6.0-21.el7

Comment 9 Yumei Huang 2016-09-08 07:10:33 UTC
Reproduce:
qemu-kvm-rhev-2.6.0-2.el7
kernel-3.10.0-497.el7.x86_64

Steps:
1. Prepare the host environment:
# cat /proc/meminfo  | grep -i huge
AnonHugePages:      8192 kB
HugePages_Total:       4
HugePages_Free:        4
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages 
1
1
1
1

# mount
hugetlbfs on /dev/hugepages/libvirt/qemu type hugetlbfs (rw,relatime,seclabel,pagesize=1G)
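
For reference, a minimal sketch (not part of the original report) of creating such a 1 GiB hugetlbfs mount, matching the mount point used in the steps below:

# mkdir -p /dev/hugepages/libvirt/qemu
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages/libvirt/qemu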

2. Boot the guest:
# /usr/libexec/qemu-kvm -name guest=aa,debug-threads=on \
  -m 2048,slots=16,maxmem=20G -realtime mlock=off \
  -smp 16,sockets=16,cores=1,threads=1 \
  -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1073741824 \
  -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \
  -object memory-backend-ram,id=ram-node1,size=1073741824 \
  -numa node,nodeid=1,cpus=8-15,memdev=ram-node1 \
  -uuid 89a142dc-9ba8-4848-b8de-3b40a6ed4a73 -no-user-config -nodefaults \
  -drive file=/home/guest/rhel73.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -spice port=5900,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on \
  -device qxl-vga -monitor stdio

3. Hotplug 1 GiB of hugepage-backed memory twice, with prealloc=yes and policy=bind:
(qemu) object_add memory-backend-file,mem-path=/dev/hugepages/libvirt/qemu,size=1G,prealloc=yes,host-nodes=1,policy=bind,id=mem0
(qemu) device_add pc-dimm,id=dimm0,memdev=mem0
(qemu) object_add memory-backend-file,mem-path=/dev/hugepages/libvirt/qemu,size=1G,prealloc=yes,host-nodes=1,policy=bind,id=mem1
os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM

QEMU quit and printed "os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM", so the bug is reproduced.


Verify:
qemu-kvm-rhev-2.6.0-22.el7
kernel-3.10.0-497.el7.x86_64

With the same steps as above, QEMU prints "os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM", but the guest continues to work well.

So the bug is fixed.

Comment 11 errata-xmlrpc 2016-11-07 21:20:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

