Bug 1417837 - [PPC] Host fails to start a second vm: Requested order 33 HPT, but kernel allocated order 24 (try smaller maxmem?)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.0.6.3
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Michal Skrivanek
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-31 09:00 UTC by Carlos Mestre González
Modified: 2017-01-31 11:28 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-31 11:28:51 UTC
oVirt Team: Virt
rule-engine: ovirt-4.0.z+
rule-engine: blocker+


Attachments
vdsm and libvirtd logs (250.85 KB, application/x-gzip)
2017-01-31 09:00 UTC, Carlos Mestre González

Description Carlos Mestre González 2017-01-31 09:00:39 UTC
Created attachment 1246116 [details]
vdsm and libvirtd logs

Description of problem:
The hypervisor fails to start a second VM while one is already running (probably a qemu issue, but I hope you can check first).

Version-Release number of selected component (if applicable):
4.0.7-0.1.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Environment with at least one host
2. Create two VMs and start them (they can be diskless VMs booted via PXE)

Actual results:
Second VM fails to start with:

2017-01-31T08:45:46.037377Z qemu-kvm: Requested order 33 HPT, but kernel allocated order 24 (try smaller maxmem?)
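For context, the two "order" numbers in the error can be decoded with a little arithmetic. This is a sketch based on the usual hash-MMU sizing rule on POWER hosts (the HPT is roughly 1/128 of the guest's maximum memory, rounded to a power of two; "order N" means a 2^N-byte table), not on the exact qemu source:

```shell
# Requested order 33 -> an 8 GiB HPT, which corresponds to a maxmem
# of about 1 TiB (8 GiB * 128 = 1 TiB):
echo $(( (1 << 33) * 128 / (1 << 40) ))   # maxmem in TiB

# The kernel could only allocate an order-24 (16 MiB) HPT, which is
# only enough for a maxmem of about 2 GiB:
echo $(( (1 << 24) * 128 / (1 << 30) ))   # maxmem in GiB
```

This suggests the guest was configured with a very large maxmem (around 1 TiB), and the host kernel could not find enough contiguous memory for a second HPT of that size.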

Additional info:
Thread-1576::ERROR::2017-01-31 03:45:46,549::vm::773::virt.vm::(_startUnderlyingVm) vmId=`4c03e811-2622-45b1-9438-e8964a3a0699`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 2026, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3777, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: qemu unexpectedly closed the monitor: 2017-01-31T08:45:46.037377Z qemu-kvm: Requested order 33 HPT, but kernel allocated order 24 (try smaller maxmem?)
Thread-1576::INFO::2017-01-31 03:45:46,552::vm::1330::virt.vm::(setDownStatus) vmId=`4c03e811-2622-45b1-9438-e8964a3a0699`::Changed state to Down: internal error: qemu unexpectedly closed the monitor: 2017-01-31T08:45:46.037377Z qemu-kvm: Requested order 33 HPT, but kernel allocated order 24 (try smaller maxmem?) (code=1)
Thread-1576::INFO::2017-01-31 03:45:46,552::guestagent::430::virt.vm::(stop) vmId=`4c03e811-2622-45b1-9438-e8964a3a0699`::Stopping connection
Thread-1576::DEBUG::2017-01-31 03:45:46,552::vmchannels::238::vds::(unregister) Delete fileno 106 from listener.
Thread-1576::DEBUG::2017-01-31 03:45:46,552::vmchannels::66::vds::(_unregister_fd) Failed to unregister FD from epoll (ENOENT): 106
Thread-1576::DEBUG::2017-01-31 03:45:46,553::__init__::209::jsonrpc.Notification::(emit) Sending event {"params": {"notify_time": 60494666960, "4c03e811-2622-45b1-9438-e8964a3a0699": {"status": "Down", "timeOffset": "0", "exitReason": 1, "exitMessage": "internal error: qemu unexpectedly closed the monitor: 2017-01-31T08:45:46.037377Z qemu-kvm: Requested order 33 HPT, but kernel allocated order 24 (try smaller maxmem?)", "exitCode": 1}}, "jsonrpc": "2.0", "method"::

Comment 1 Carlos Mestre González 2017-01-31 09:01:28 UTC
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
qemu-img-rhev-2.6.0-28.el7_3.3.ppc64le
qemu-kvm-rhev-2.6.0-28.el7_3.3.ppc64le
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.4.ppc64le
qemu-kvm-tools-rhev-2.6.0-28.el7_3.3.ppc64le
qemu-kvm-common-rhev-2.6.0-28.el7_3.3.ppc64le
libvirt-2.0.0-10.el7_3.4.ppc64le

Comment 2 Tomas Jelinek 2017-01-31 11:28:51 UTC
The problem is that oVirt sends too high a value for max_mem.
The reason for this high value is that memory can only be hot-plugged up to the max memory limit, and this limit is very high and global for all VMs (up to and including 4.0).

For 4.1 we have enhanced this significantly:
- you can change the max memory per VM from the UI
- the default max memory for a VM is 4 * the VM's current memory

For 4.0, if you are not planning to hot-plug memory, I would recommend setting VMPpc64BitMaxMemorySizeInMB to a value slightly above the memory size of your largest VM.
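As a hypothetical example of that workaround (the 8192 MB value and the --cver version argument are illustrative; pick a value just above your largest VM's memory, and check the exact syntax for your release):

```shell
# On the engine host: cap the ppc64 maxmem default for cluster level 4.0.
engine-config -s VMPpc64BitMaxMemorySizeInMB=8192 --cver=4.0

# engine-config changes only take effect after an engine restart.
systemctl restart ovirt-engine
```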

Closing this as not a bug; if the setting does not work for you, please feel free to reopen.

