Description of problem:
QEMU quits immediately when a guest is booted without enough hugepages allocated. As https://bugzilla.redhat.com/show_bug.cgi?id=1329086#c3 says, QEMU should continue when not enough hugepages are allocated.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.5.0-4.el7
kernel-3.10.0-373.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Allocate 600M of hugepages on the host:
# echo 300 > /proc/sys/vm/nr_hugepages
# cat /proc/meminfo | grep -i hugepage
AnonHugePages:      6144 kB
HugePages_Total:     300
HugePages_Free:      300
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
# mount
none on /mnt/kvm_hugepage type hugetlbfs (rw,relatime,seclabel,pagesize=2048K)

2. Boot the guest with a 1G hugepage-backed memory backend:
/usr/libexec/qemu-kvm -m 1G,slots=4,maxmem=32G -smp 4 \
  -object memory-backend-file,mem-path=/mnt/kvm_hugepage,size=1G,id=mem-mem1 \
  -device pc-dimm,id=dimm-mem1,memdev=mem-mem1 \
  -drive file=/home/guest/RHEL-Server-7.3-64-virtio.qcow2,id=drive-virtio-disk1,media=disk,cache=none,snapshot=off,format=qcow2,aio=native,if=none \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,bootindex=0 \
  -monitor stdio -vnc :0

Actual results:
QEMU quits and prints:
"qemu-kvm: -object memory-backend-file,mem-path=/mnt/kvm_hugepage,size=1G,id=mem-mem1: unable to map backing store for hugepages: Cannot allocate memory"

Expected results:
QEMU should continue and not quit.

Additional info:
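For reference, the shortage in the reproducer is easy to confirm by arithmetic: a 1G backing file on a 2048 kB hugetlbfs mount needs more pages than the 300 allocated above. A minimal sketch (the figures are copied from the reproducer; reading them live from /proc/meminfo would be the real-world variant):

```shell
#!/bin/sh
# Figures from the reproducer above; on a real host, HUGEPAGE_KB and
# FREE_PAGES would be parsed from /proc/meminfo instead.
BACKING_MB=1024      # size=1G on the memory-backend-file object
HUGEPAGE_KB=2048     # "Hugepagesize: 2048 kB"
FREE_PAGES=300       # "HugePages_Free: 300"

# Number of hugepages needed to map the whole backing file.
NEEDED=$(( BACKING_MB * 1024 / HUGEPAGE_KB ))

if [ "$NEEDED" -gt "$FREE_PAGES" ]; then
    echo "short by $(( NEEDED - FREE_PAGES )) hugepages ($NEEDED needed, $FREE_PAGES free)"
else
    echo "enough hugepages"
fi
```

With these numbers the check reports a shortfall of 212 pages (512 needed, 300 free), which is why mmap of the backing store fails with ENOMEM.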
Igor, shouldn't memory-backend-file fall back to regular RAM the same way -mem-path does? If not, then I think it's a good idea to document the difference in semantics in the manpage.
(In reply to Luiz Capitulino from comment #2) > Igor, shouldn't memory-backend-file fall back to regular RAM the same way > -mem-path does? If not, then I think it's a good idea to document the > difference in semantics in the manpage. I don't think it ever did, nor should it: it's a configuration error, and the user should fix it either by switching to the ram backend or by allocating more hugepages, rather than getting a silent fallback and a performance regression. I'd do the same for -mem-path, but that would break 'broken' setups out there, so it can't be fixed and we have to live with it.
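The difference in semantics discussed above can be summarized by comparing the two ways of requesting hugepage backing. This is a sketch based on the behavior described in this report (the trailing "..." stands for the rest of the command line from the reproducer):

```shell
# Legacy option: if /mnt/kvm_hugepage cannot supply enough hugepages,
# QEMU falls back to regular anonymous RAM and the guest still boots,
# possibly with an unnoticed performance regression.
/usr/libexec/qemu-kvm -m 1G -mem-path /mnt/kvm_hugepage ...

# Explicit backend object: the same shortage is treated as a hard
# configuration error and QEMU exits, printing:
#   "unable to map backing store for hugepages: Cannot allocate memory"
/usr/libexec/qemu-kvm -m 1G,slots=4,maxmem=32G \
    -object memory-backend-file,mem-path=/mnt/kvm_hugepage,size=1G,id=mem-mem1 \
    -device pc-dimm,id=dimm-mem1,memdev=mem-mem1 ...
```

Per Igor's comment, the hard failure of the backend object is the intended behavior, and the -mem-path fallback is kept only for compatibility with existing setups.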