Bug 1351409
Summary: | When hotplugging memory, the guest shuts down with "Insufficient free host memory pages available to allocate" | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | yalzhang <yalzhang> |
Component: | qemu-kvm-rhev | Assignee: | Igor Mammedov <imammedo> |
Status: | CLOSED ERRATA | QA Contact: | Yumei Huang <yuhuang> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | 7.3 | CC: | chayang, hhuang, huding, juzhang, knoel, lhuang, virt-maint, yafu, yalzhang, yuhuang |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | qemu-kvm-rhev-2.6.0-21.el7 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-11-07 21:20:35 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
yalzhang@redhat.com
2016-06-30 03:41:25 UTC
Please provide the command line used to start QEMU and the QMP command used to hotplug memory.

# ps -aux | grep r7.1
qemu 24980 49.8 0.1 2563696 165632 ? Sl 18:56 0:16 /usr/libexec/qemu-kvm -name guest=r7.1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-42-r7.1/master-key.aes -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -m size=2097152k,slots=16,maxmem=25600000k -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages1G/libvirt/qemu,size=1073741824 -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -object memory-backend-ram,id=ram-node1,size=1073741824 -numa node,nodeid=1,cpus=2-3,memdev=ram-node1 -uuid b87330b8-a5dc-4fd1-b307-b504debdcca7 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-42-r7.1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/r7.1.img,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 -msg timestamp=on

# cat /proc/meminfo | grep Huge
AnonHugePages: 1175552 kB
HugePages_Total: 4
HugePages_Free: 3
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB

# virsh qemu-monitor-command r7.1 --pretty '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm0","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
{ "return": { }, "id": "libvirt-13" }

# virsh qemu-monitor-command r7.1 --pretty '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm1","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
{ "return": { }, "id": "libvirt-14" }

# virsh qemu-monitor-command r7.1 --pretty '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm2","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'
error: Unable to read from monitor: Connection reset by peer

# virsh domstate r7.1
shut off

I can't reproduce it locally, could you provide access to a system where the bug reproduces?

Hi Juzhang and Igor, I have just sent a mail with the access info for the system, and r7.1 is ready to use to reproduce the issue. Please have a look, thank you very much!

Fixed upstream (2.7):
056b68af fix qemu exit on memory hotplug when allocation fails at prealloc time

Fix included in qemu-kvm-rhev-2.6.0-21.el7
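With the fixed build, a failed preallocation during object-add should come back as a monitor error instead of terminating QEMU. As a rough sketch of how that could be confirmed on the reporter's setup (domain name, hugepage path, and node binding are copied from the commands above; the exact error reply is not reproduced here and may vary):

# virsh qemu-monitor-command r7.1 --pretty '{"execute":"object-add","arguments":{"qom-type":"memory-backend-file","id":"memdimm2","props":{"prealloc":true,"mem-path":"/dev/hugepages1G/libvirt/qemu","size":1073741824,"host-nodes":[1],"policy":"bind"}}}'

If node 1 has no free 1 GiB pages left, this should now fail with an error reply rather than resetting the monitor connection, and the domain should still report "running":

# virsh domstate r7.1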
Reproduce:
qemu-kvm-rhev-2.6.0-2.el7
kernel-3.10.0-497.el7.x86_64

Steps:
1. Prepare the host environment:
# cat /proc/meminfo | grep -i huge
AnonHugePages: 8192 kB
HugePages_Total: 4
HugePages_Free: 4
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
1
1
1
1
# mount
hugetlbfs on /dev/hugepages/libvirt/qemu type hugetlbfs (rw,relatime,seclabel,pagesize=1G)

2. Boot the guest:
# /usr/libexec/qemu-kvm -name guest=aa,debug-threads=on \
 -m 2048,slots=16,maxmem=20G -realtime mlock=off \
 -smp 16,sockets=16,cores=1,threads=1 \
 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=1073741824 -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \
 -object memory-backend-ram,id=ram-node1,size=1073741824 -numa node,nodeid=1,cpus=8-15,memdev=ram-node1 \
 -uuid 89a142dc-9ba8-4848-b8de-3b40a6ed4a73 -no-user-config -nodefaults \
 -drive file=/home/guest/rhel73.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
 -spice port=5900,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga -monitor stdio

3. Hotplug 1 GiB hugepage memory twice, with prealloc=yes and policy=bind:
(qemu) object_add memory-backend-file,mem-path=/dev/hugepages/libvirt/qemu,size=1G,prealloc=yes,host-nodes=1,policy=bind,id=mem0
(qemu) device_add pc-dimm,id=dimm0,memdev=mem0
(qemu) object_add memory-backend-file,mem-path=/dev/hugepages/libvirt/qemu,size=1G,prealloc=yes,host-nodes=1,policy=bind,id=mem1
os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM

QEMU quits and prints "os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM", so the bug is reproduced.

Verify:
qemu-kvm-rhev-2.6.0-22.el7
kernel-3.10.0-497.el7.x86_64

With the same steps as above, QEMU prints "os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM", but the guest keeps working, so the bug is fixed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html
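A note on why the second hotplug in the reproduce steps fails: with a single 1 GiB hugepage reserved per NUMA node, the first backend bound to host node 1 consumes that node's only page, so the second prealloc=yes, policy=bind allocation on the same node has nothing left to map. A quick way to check availability before hotplugging, assuming the same sysfs layout shown in step 1:

# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages

If this reports 0, the next prealloc=yes backend bound to node 1 will fail; with qemu-kvm-rhev-2.6.0-21.el7 or later the failure is reported as an error instead of shutting down the guest.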