Bug 1523835 - Hosted-Engine: memory hotplug does not work for engine vm
Summary: Hosted-Engine: memory hotplug does not work for engine vm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.0
Assignee: Andrej Krejcir
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Duplicates: 1551257 1685569 (view as bug list)
Depends On: 1478959
Blocks: CEECIR_RHV43_proposed
 
Reported: 2017-12-08 20:01 UTC by rhev-integ
Modified: 2023-10-06 17:41 UTC (History)
25 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
undefined
Clone Of: 1478959
Environment:
Last Closed: 2020-08-04 13:16:05 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:16:42 UTC
oVirt gerrit 105533 0 master MERGED core: Fix memory hotplug for hosted engine 2021-02-09 15:23:30 UTC

Description rhev-integ 2017-12-08 20:01:14 UTC
+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1478959 +++
======================================================================

Created attachment 1310157 [details]
vdsm log

Description of problem:
Increase of HE VM memory raises tracebacks under VDSM log

2017-08-07 17:55:15,246+0300 ERROR (jsonrpc/5) [virt.vm] (vmId='b58fdeda-45bb-43d2-b336-ef9953171347') hotplugMemory failed (vm:2971)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2969, in hotplugMemory
    self._dom.attachDevice(deviceXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 95, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 125, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 586, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 540, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: unsupported configuration: Attaching memory device with size '1966080' would exceed domain's maxMemory config
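The libvirt error above means the requested memory DIMM would push the domain past its `<maxMemory>` setting. As a minimal sketch (plain Python with a hypothetical domain XML fragment, not vdsm code), the check libvirt performs amounts to the following, with all sizes in KiB as in the error message:

```python
import xml.etree.ElementTree as ET

# Hypothetical domain XML fragment: in the failing case the HE VM was
# started with <maxMemory> equal to its current memory, so any hotplug
# request exceeds it.
DOMAIN_XML = """
<domain type="kvm">
  <memory>4194304</memory>
  <maxMemory slots="16">4194304</maxMemory>
</domain>
"""

def hotplug_would_exceed_max(domain_xml: str, dimm_kib: int) -> bool:
    """Mirror libvirt's validation: current memory plus the new DIMM
    must stay within <maxMemory>, otherwise attachDevice fails."""
    root = ET.fromstring(domain_xml)
    current = int(root.findtext("memory"))
    max_mem = int(root.findtext("maxMemory"))
    return current + dimm_kib > max_mem

# The traceback reports a DIMM of 1966080 KiB (1920 MiB) being rejected.
print(hotplug_would_exceed_max(DOMAIN_XML, 1966080))  # True -> libvirtError
```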


Version-Release number of selected component (if applicable):
vdsm-4.20.2-25.git7499b81.el7.centos.x86_64
libvirt-client-3.2.0-14.el7_4.2.x86_64
ovirt-engine-4.2.0-0.0.master.20170803140556.git1e7d0dd.el7.centos.noarch

How reproducible:
Always

Steps to Reproduce:
1. Deploy hosted-engine
2. Add master storage domain
3. Wait for auto-import operation
4. Increase amount of HE VM memory via UI

Actual results:
The action succeeds in the engine, but I can see a traceback in the vdsm log

Expected results:
The action succeeds in the engine and the vdsm log does not contain any new tracebacks

Additional info:

(Originally by Artyom Lukianov)

Comment 1 rhev-integ 2017-12-08 20:01:25 UTC
what's the maximum memory value in engine's HE VM dialog?

(Originally by michal.skrivanek)

Comment 3 rhev-integ 2017-12-08 20:01:30 UTC
16 GB; I installed the HE VM with 4 GB of memory, so the max value is 4 * memory.

(Originally by Artyom Lukianov)

Comment 4 rhev-integ 2017-12-08 20:01:36 UTC
I believe the problem is that vdsm receives the memory hotplug command while we still do not support it for the HE VM.

(Originally by Artyom Lukianov)

Comment 5 rhev-integ 2017-12-08 20:01:41 UTC
(In reply to Artyom from comment #2)
> 16Gb, I installed HE VM with 4Gb of memory, so max value is 4 * memory.

Nope, it's started with 4 GB as well, hence you're unable to hotplug anything more. Indeed it may not be supported.

(Originally by michal.skrivanek)

Comment 6 rhev-integ 2017-12-08 20:01:46 UTC
Memory hotplug for hosted engine is still not supported and there used to be a condition in the engine code that skipped the call for hosted engine. I know Michal was not fond of it, but it should have prevented this error.

Btw: Did the apply now / later dialog show up?

(Originally by Martin Sivak)

Comment 7 rhev-integ 2017-12-08 20:01:54 UTC
No, when I update the memory, it does not show "Apply Later" dialog.

(Originally by Artyom Lukianov)

Comment 8 rhev-integ 2017-12-08 20:02:00 UTC
If memory hotplug for HE VM is not supported, why is the bug targeted for 4.2.0?

(Originally by Yaniv Kaul)

Comment 9 rhev-integ 2017-12-08 20:02:05 UTC
Moving forward since we're done with 4.2.

(Originally by Doron Fediuck)

Comment 11 Marina Kalinin 2017-12-08 20:17:17 UTC
Doron, since HE VM memory hotplug is not yet supported, maybe we should change this to an RFE, or close it as a duplicate of an existing one?
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/virtual_machine_management_guide/hot_plugging_virtual_memory

Comment 12 Marina Kalinin 2017-12-11 04:25:46 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1304347

Comment 13 Doron Fediuck 2017-12-20 11:24:39 UTC

*** This bug has been marked as a duplicate of bug 1304347 ***

Comment 14 Marina Kalinin 2018-03-01 19:21:31 UTC
It can't be a duplicate.
It is a downstream clone.
We need to have those to attach customer tickets to.

Comment 15 Martin Sivák 2018-03-05 12:02:12 UTC
Restoring the flags removed by bugbot.

Comment 16 Marina Kalinin 2018-03-05 20:45:56 UTC
*** Bug 1551257 has been marked as a duplicate of this bug. ***

Comment 17 Martin Sivák 2018-06-26 15:45:55 UTC
We believe this is already working properly now. I am keeping this clone open to let QE test it so we are sure.

Comment 21 Nikolai Sednev 2018-06-27 12:45:35 UTC
Before the hotplug:
2018-06-27 15:32:01,377+0300 INFO  (jsonrpc/2) [api.virt] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': [{'vcpuCount': '4', 'memUsage': '19', 'acpiEnable': 'true', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '10.35.92.4', 'type': 'vnc', 'port': '5900'}], 'guestFQDN': u'nsednev-he-2.qa.lab.tlv.redhat.com', 'vmId': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'session': 'Unknown', 'netIfaces': [{u'inet6': [u'fe80::216:3eff:fe7b:b854', u'2620:52:0:235c:216:3eff:fe7b:b854'], u'hw': u'00:16:3e:7b:b8:54', u'inet': [u'10.35.92.52'], u'name': u'eth0'}], 'timeOffset': '0', 'memoryStats': {'swap_out': '0', 'majflt': '0', 'mem_cached': '864640', 'mem_free': '12571192', 'mem_buffers': '2092', 'swap_in': '0', 'pageflt': '494', 'mem_total': '16265236', 'mem_unused': '12571192'}, 'balloonInfo': {'balloon_max': '16777216', 'balloon_min': '16777216', 'balloon_target': '16777216', 'balloon_cur': '16777216'}, 'pauseCode': 'NOERR', 'disksUsage': [{u'path': u'/', u'total': '53675536384', u'fs': u'xfs', u'used': '4487667712'}], 'network': {'vnet0': {'macAddr': '00:16:3e:7b:b8:54', 'rxDropped': '0', 'tx': '104116304', 'rxErrors': '0', 'txDropped': '0', 'rx': '233233549', 'txErrors': '0', 'state': 'unknown', 'sampleTime': 5438559.97, 'speed': '1000', 'name': 'vnet0'}}, 'vmType': 'kvm', 'guestName': u'nsednev-he-2.qa.lab.tlv.redhat.com', 'elapsedTime': '177003', 'vmJobs': {}, 'cpuSys': '0.67', 'appsList': (u'kernel-3.10.0-862.6.3.el7', u'kernel-3.10.0-862.6.1.el7', u'ovirt-guest-agent-common-1.0.14-3.el7ev', u'cloud-init-0.7.9-24.el7_5.1', u'kernel-3.10.0-862.el7'), 'guestOs': u'3.10.0-862.6.3.el7.x86_64', 'vmName': 'HostedEngine', 'status': 'Up', 'clientIp': '', 'hash': '7065905389596309500', 'guestCPUCount': 4, 'cpuUsage': '2044760000000', 'vcpuPeriod': 100000L, 'guestTimezone': {u'zone': u'America/New_York', u'offset': -300}, 'vcpuQuota': '-1', 'statusTime': '5438559970', 'kvmEnable': 'true', 'disks': {'vda': {'readLatency': '0', 'flushLatency': '468594', 
'readRate': '0.0', 'writeRate': '39936.0', 'writtenBytes': '5404049408', 'truesize': '6995591168', 'apparentsize': '62277025792', 'readOps': '29109', 'writeLatency': '3770720', 'imageID': 'b5d79b3d-2efd-4eca-a3c0-eb4313c4a689', 'readBytes': '762182144', 'writeOps': '423743'}, 'hdc': {'readLatency': '0', 'flushLatency': '0', 'readRate': '0.0', 'writeRate': '0.0', 'writtenBytes': '0', 'truesize': '0', 'apparentsize': '0', 'readOps': '4', 'writeLatency': '0', 'readBytes': '152', 'writeOps': '0'}}, 'monitorResponse': '0', 'guestOsInfo': {u'kernel': u'3.10.0-862.6.3.el7.x86_64', u'arch': u'x86_64', u'version': u'7.5', u'distribution': u'Red Hat Enterprise Linux Server', u'type': u'linux', u'codename': u'Maipo'}, 'username': u'root', 'cpuUser': '5.18', 'lastLogin': 1530102112.698157, 'guestIPs': u'10.35.92.52', 'guestContainers': []}]} from=::1,49486, vmId=e4d53fec-5ead-4154-a489-9a4db112828a (api:52)
2018-06-27 15:32:01,378+0300 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.getStats succeeded in 0.00 seconds (__init__:573)


After hotplug on host:
2018-06-27 15:32:16,981+0300 INFO  (jsonrpc/4) [api.virt] START hotplugMemory(params={u'memory': {u'node': 0, u'specPa
rams': {u'node': u'0', u'size': u'128'}, u'deviceId': u'6790bc27-a291-4c9e-8b96-ac84a30d1393', u'device': u'memory', u
'type': u'memory', u'size': 128}, u'vmId': u'e4d53fec-5ead-4154-a489-9a4db112828a', u'memGuaranteedSize': 18432}) from
=::ffff:10.35.92.52,48844, flow_id=1839a665, vmId=e4d53fec-5ead-4154-a489-9a4db112828a (api:46)
2018-06-27 15:32:16,992+0300 INFO  (jsonrpc/0) [vdsm.api] START getStoragePoolInfo(spUUID=u'5b1e79f2-00ce-02af-0005-00
00000001eb', options=None) from=::ffff:10.35.92.52,48860, task_id=9d6dc625-ed35-4563-836b-7eb155deb273 (api:46)
2018-06-27 15:32:17,046+0300 INFO  (jsonrpc/0) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Descr
iption', 'isoprefix': '', 'pool_status': 'connected', 'lver': 6L, 'domains': u'9722aafe-5247-4781-9585-a880483fbd98:Active,4fb972d3-5659-4a0f-a564-65e83fc81f72:Active', 'master_uuid': u'9722aafe-5247-4781-9585-a880483fbd98', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 1}, 'dominfo': {u'9722aafe-5247-4781-9585-a880483fbd98': {'status': u'Active', 'diskfree': '1454038319104', 'isoprefix': '', 'alerts': [], 'disktotal': '4396973817856', 'version': 4}, u'4fb972d3-5659-4a0f-a564-65e83fc81f72': {'status': u'Active', 'diskfree': '1454038319104', 'isoprefix': '', 'alerts': [], 'disktotal': '4396973817856', 'version': 4}}} from=::ffff:10.35.92.52,48860, task_id=9d6dc625-ed35-4563-836b-7eb155deb273 (api:52)
2018-06-27 15:32:17,047+0300 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.06 seconds (__init__:573)
2018-06-27 15:32:17,095+0300 INFO  (jsonrpc/4) [api.virt] FINISH hotplugMemory return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'Up', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.3.0', 'tabletEnable': 'true', 'vmId': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'memGuaranteedSize': 18432, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge', 'guestDiskMapping': {u'b5d79b3d-2efd-4eca-a': {u'name': u'/dev/vda'}, u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml': u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><uuid>e4d53fec-5ead-4154-a489-9a4db112828a</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><maxMemory slots="16">67108864</maxMemory><vcpu current="4">16</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">oVirt</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">e4d53fec-5ead-4154-a489-9a4db112828a</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer name="hpet" present="no"/></clock><features><acpi/><vmcoreinfo/></features><cpu match="exact"><model>SandyBridge</model><topology cores="1" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3" memory="16777216"/></numa></cpu><cputune/><devices><input type="tablet" bus="usb"/><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/e4d53fec-5ead-4154-a489-9a4db112828a.ovirt-guest-agent.0"/></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind" 
path="/var/lib/libvirt/qemu/channels/e4d53fec-5ead-4154-a489-9a4db112828a.org.qemu.guest_agent.0"/></channel><controller type="ide"><address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/></controller><rng model="virtio"><backend model="random">/dev/random</backend></rng><console type="pty"><target type="virtio" port="0"/><alias name="ua-2f03b73b-0db3-4827-ab"/></console><controller type="virtio-serial" index="0" ports="16"><address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/></controller><controller type="usb"><address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/></controller><controller type="scsi"><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/></controller><video><model type="vga" vram="32768" heads="1"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/></video><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"/></graphics><memballoon model="none"/><interface type="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><alias name="ua-911b08c5-2e7d-4b09-9fcb-1f722064ce83"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/><mac address="00:16:3e:7b:b8:54"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source file="" startupPolicy="optional"/><target dev="hdc" bus="ide"/><readonly/><address bus="1" controller="0" unit="0" type="drive" target="0"/></disk><disk snapshot="no" type="file" device="disk"><target dev="vda" bus="virtio"/><source file="/rhev/data-center/00000000-0000-0000-0000-000000000000/4fb972d3-5659-4a0f-a564-65e83fc81f72/images/b5d79b3d-2efd-4eca-a3c0-eb4313c4a689/c8c5aac3-9553-4d4f-b
933-9f7471437b05"/><driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/><alias name="ua-b5d79b3d-2efd-4eca-a3c0-eb4313c4a689"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/><serial>b5d79b3d-2efd-4eca-a3c0-eb4313c4a689</serial></disk><lease><key>c8c5aac3-9553-4d4f-b933-9f7471437b05</key><lockspace>4fb972d3-5659-4a0f-a564-65e83fc81f72</lockspace><target offset="LEASE-OFFSET:c8c5aac3-9553-4d4f-b933-9f7471437b05:4fb972d3-5659-4a0f-a564-65e83fc81f72" path="LEASE-PATH:c8c5aac3-9553-4d4f-b933-9f7471437b05:4fb972d3-5659-4a0f-a564-65e83fc81f72"/></lease></devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type><smbios mode="sysinfo"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><minGuaranteedMemoryMb type="int">16384</minGuaranteedMemoryMb><clusterVersion>4.1</clusterVersion><ovirt-vm:custom/><ovirt-vm:device mac_address="00:16:3e:7b:b8:54"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device devtype="disk" name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>c8c5aac3-9553-4d4f-b933-9f7471437b05</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>b5d79b3d-2efd-4eca-a3c0-eb4313c4a689</ovirt-vm:imageID><ovirt-vm:domainID>4fb972d3-5659-4a0f-a564-65e83fc81f72</ovirt-vm:domainID></ovirt-vm:device><launchPaused>false</launchPaused></ovirt-vm:vm></metadata></domain>', 'smpCoresPerSocket': '1', 'kvmEnable': 'true', 'bootMenuEnable': 'false', 'devices': [{'index': 2, 'iface': 'ide', 'name': 'hdc', 'vm_custom': {}, 'format': 'raw', 'vmid': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'diskType': 'file', 'specParams': {}, 'readonly': 'True', 'alias': 'ide0-1-0', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'discard': False, 'path': '', 'propagateErrors': 'report', 'type': 'disk'}, {'address': {'slot': '0x06', 
'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'reqsize': '0', 'serial': 'b5d79b3d-2efd-4eca-a3c0-eb4313c4a689', 'index': 0, 'iface': 'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'cache': 'none', 'imageID': 'b5d79b3d-2efd-4eca-a3c0-eb4313c4a689', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '6967361536', 'type': 'disk', 'domainID': '4fb972d3-5659-4a0f-a564-65e83fc81f72', 'volumeInfo': {'path': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_nsednev__he__2/4fb972d3-5659-4a0f-a564-65e83fc81f72/images/b5d79b3d-2efd-4eca-a3c0-eb4313c4a689/c8c5aac3-9553-4d4f-b933-9f7471437b05', 'type': 'file'}, 'format': 'raw', 'poolID': '00000000-0000-0000-0000-000000000000', 'device': 'disk', 'path': u'/var/run/vdsm/storage/4fb972d3-5659-4a0f-a564-65e83fc81f72/b5d79b3d-2efd-4eca-a3c0-eb4313c4a689/c8c5aac3-9553-4d4f-b933-9f7471437b05', 'propagateErrors': 'off', 'name': 'vda', 'vm_custom': {}, 'vmid': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'volumeID': 'c8c5aac3-9553-4d4f-b933-9f7471437b05', 'diskType': 'file', 'alias': 'ua-b5d79b3d-2efd-4eca-a3c0-eb4313c4a689', 'discard': False, 'volumeChain': [{'domainID': '4fb972d3-5659-4a0f-a564-65e83fc81f72', 'leaseOffset': 0, 'volumeID': u'c8c5aac3-9553-4d4f-b933-9f7471437b05', 'leasePath': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_nsednev__he__2/4fb972d3-5659-4a0f-a564-65e83fc81f72/images/b5d79b3d-2efd-4eca-a3c0-eb4313c4a689/c8c5aac3-9553-4d4f-b933-9f7471437b05.lease', 'imageID': 'b5d79b3d-2efd-4eca-a3c0-eb4313c4a689', 'path': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_nsednev__he__2/4fb972d3-5659-4a0f-a564-65e83fc81f72/images/b5d79b3d-2efd-4eca-a3c0-eb4313c4a689/c8c5aac3-9553-4d4f-b933-9f7471437b05'}]}, {'device': 'console', 'alias': 'ua-2f03b73b-0db3-4827-ab', 'type': 'console', 'specParams': {'consoleType': 'virtio', 'enableSocket': False}}, {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 
'balloon'}, {'device': 'vnc', 'specParams': {'fileTransferEnable': True, 'copyPasteEnable': True, 'keyMap': 'en-us', 'displayIp': '10.35.92.4', 'displayNetwork': 'ovirtmgmt'}, 'port': '-1', 'type': 'graphics'}, {'device': 'virtio', 'specParams': {'source': 'random'}, 'model': 'virtio', 'type': 'rng'}, {'device': 'ide', 'type': 'controller', 'address': {'function': '0x1', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x01'}}, {'device': 'virtio-serial', 'index': '0', 'type': 'controller', 'ports': '16', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x05'}}, {'device': 'usb', 'type': 'controller', 'address': {'function': '0x2', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x01'}}, {'device': 'scsi', 'type': 'controller', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x04'}}, {u'node': 0, u'specParams': {u'node': u'0', u'size': u'128'}, u'deviceId': u'6790bc27-a291-4c9e-8b96-ac84a30d1393', u'device': u'memory', u'type': u'memory', u'size': 128}, {'nicModel': 'pv', 'macAddr': '00:16:3e:7b:b8:54', 'linkActive': True, 'filterParameters': [], 'specParams': {}, 'filter': 'vdsm-no-mac-spoofing', 'alias': 'ua-911b08c5-2e7d-4b09-9fcb-1f722064ce83', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x03'}, 'device': 'bridge', 'type': 'interface', 'network': 'ovirtmgmt'}, {'device': 'vga', 'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x02'}}, {'lease_id': 'c8c5aac3-9553-4d4f-b933-9f7471437b05', 'sd_id': '4fb972d3-5659-4a0f-a564-65e83fc81f72', 'offset': 'LEASE-OFFSET:c8c5aac3-9553-4d4f-b933-9f7471437b05:4fb972d3-5659-4a0f-a564-65e83fc81f72', 'device': 'lease', 'path': 'LEASE-PATH:c8c5aac3-9553-4d4f-b933-9f7471437b05:4fb972d3-5659-4a0f-a564-65e83fc81f72', 'type': 'lease'}], 'custom': {}, 'maxVCpus': '16', 'clientIp': '', 
'statusTime': '5438575680', 'vmName': 'HostedEngine', 'maxMemSlots': 16}} from=::ffff:10.35.92.52,48844, flow_id=1839a665, vmId=e4d53fec-5ead-4154-a489-9a4db112828a (api:52)
2018-06-27 15:32:17,099+0300 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call VM.hotplugMemory succeeded in 0.12 seconds (__init__:573)

Since "VM.hotplugMemory succeeded in 0.12 seconds" was properly logged and I successfully changed the memory from 16384 MB to 18432 MB, moving to verified.

Tested on these components on hosts:
ovirt-hosted-engine-ha-2.2.13-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.22-1.el7ev.noarch
Linux 3.10.0-862.3.2.el7.x86_64 #1 SMP Tue May 15 18:22:15 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 7.5

On engine:
ovirt-engine-setup-4.2.4.5-0.1.el7_3.noarch
Linux 3.10.0-862.6.3.el7.x86_64 #1 SMP Fri Jun 15 17:57:37 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)
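The "Done" RPC status above only confirms that the call completed, not that the guest actually gained memory. A hedged sketch of a stronger verification step (a hypothetical helper, not part of vdsm): compare the guest-visible memory reported by `VM.getStats` before and after the hotplug, e.g. via the `balloonInfo` fields (values in KiB, as in the "Before the hotplug" excerpt above):

```python
def hotplug_took_effect(before: dict, after: dict) -> bool:
    """True only if the guest-visible maximum memory actually grew
    between the two getStats samples."""
    return int(after["balloon_max"]) > int(before["balloon_max"])

# balloonInfo values copied from the log above; the "after" value is
# what 18432 MB would look like in KiB if the hotplug worked.
before = {"balloon_max": "16777216"}          # 16384 MB
after_expected = {"balloon_max": "18874368"}  # 18432 MB
print(hotplug_took_effect(before, after_expected))  # True only on success
```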

Comment 22 RHV bug bot 2018-12-10 15:12:48 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 23 RHV bug bot 2019-01-15 23:35:18 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 24 Olimp Bockowski 2019-03-22 15:49:11 UTC
@Nikolai - is it fixed properly for sure, or do we have a new bug? I have the latest (rhvm-4.2.8.5-0.1.el7ev.noarch) and changing the Self-Hosted Engine memory changes Memory Guaranteed instead (quite the opposite of a normal VM).
You wrote:
"Due to "VM.hotplugMemory succeeded in 0.12 seconds" was properly received and I've successfully changed memory from 16384MB to 18432MB, moving to verified."

but we had:
2018-06-27 15:32:16,981+0300 INFO  (jsonrpc/4) [api.virt] START hotplugMemory(params={u'memory': {u'node': 0, u'specPa
rams': {u'node': u'0', u'size': u'128'}, u'deviceId': u'6790bc27-a291-4c9e-8b96-ac84a30d1393', u'device': u'memory', u
'type': u'memory', u'size': 128}, u'vmId': u'e4d53fec-5ead-4154-a489-9a4db112828a', u'memGuaranteedSize': 18432}) from
=::ffff:10.35.92.52,48844, flow_id=1839a665, vmId=e4d53fec-5ead-4154-a489-9a4db112828a (api:46)

so 'memGuaranteedSize': 18432
and further:

2018-06-27 15:32:17,095+0300 INFO  (jsonrpc/4) [api.virt] FINISH hotplugMemory return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'Up', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.3.0', 'tabletEnable': 'true', 'vmId': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'memGuaranteedSize': 18432, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge', 'guestDiskMapping': {u'b5d79b3d-2efd-4eca-a': {u'name': u'/dev/vda'}, u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}]

so the memory is not changed, just the guaranteed size. Did you check in the Admin Portal and inside the VM? I believe we have a bug in the SHE hotplugMemory logic; I will work on it further on Monday, I have to dig into it more.

Comment 25 Nikolai Sednev 2019-03-25 10:31:55 UTC
Further to https://bugzilla.redhat.com/show_bug.cgi?id=1523835#c21 it was fixed; if it's not working well now, please open a new bug as a regression.
I checked on the downstream components as described above.
Please check your environment's component versions and open a separate bug if required.

Comment 26 Olimp Bockowski 2019-03-25 12:46:38 UTC
@Nikolai - the point is that IMHO it shouldn't have gone to verified, because the test failed. 
From your log output:

2018-06-27 15:32:17,095+0300 INFO  (jsonrpc/4) [api.virt] FINISH hotplugMemory return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'Up', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.3.0', 'tabletEnable': 'true', 'vmId': 'e4d53fec-5ead-4154-a489-9a4db112828a', 'memGuaranteedSize': 18432, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge', 'guestDiskMapping': {u'b5d79b3d-2efd-4eca-a': {u'name': u'/dev/vda'}, u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}]

so we see memory is still 16384, maxMemSize is 65536 and memGuaranteedSize 18432
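The comparison being made here can be sketched as a small check over the Python-literal fields in the logged return (a hypothetical helper for illustration, not vdsm code): the hotplug only "worked" if the guest-visible NUMA memory grew, not just the guaranteed size.

```python
def only_guaranteed_changed(vm_info: dict, requested_mb: int) -> bool:
    """True when the request moved memGuaranteedSize but left the
    guest-visible NUMA memory untouched -- the failure mode seen here."""
    numa_total = sum(int(n["memory"]) for n in vm_info["guestNumaNodes"])
    return (vm_info["memGuaranteedSize"] == requested_mb
            and numa_total < requested_mb)

# Fields excerpted from the FINISH hotplugMemory line above (values in MB).
vm_info = {
    "memGuaranteedSize": 18432,
    "guestNumaNodes": [{"nodeIndex": 0, "cpus": "0,1,2,3", "memory": "16384"}],
}
print(only_guaranteed_changed(vm_info, 18432))  # True: the bug reproduced
```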

Comment 27 Nikolai Sednev 2019-03-25 12:50:59 UTC
(In reply to Olimp Bockowski from comment #26)
> @Nikolai - the point is that IMHO it shouldn't go into verified because the
> test was failed. 
> From your log output:
> 
> 2018-06-27 15:32:17,095+0300 INFO  (jsonrpc/4) [api.virt] FINISH
> hotplugMemory return={'status': {'message': 'Done', 'code': 0}, 'vmList':
> {'status': 'Up', 'maxMemSize': 65536, 'acpiEnable': 'true',
> 'emulatedMachine': 'pc-i440fx-rhel7.3.0', 'tabletEnable': 'true', 'vmId':
> 'e4d53fec-5ead-4154-a489-9a4db112828a', 'memGuaranteedSize': 18432,
> 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge',
> 'guestDiskMapping': {u'b5d79b3d-2efd-4eca-a': {u'name': u'/dev/vda'},
> u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, 'arch': 'x86_64', 'smp':
> '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory':
> '16384'}]
> 
> so we see memory is still 16384, maxMemSize is 65536 and memGuaranteedSize
> 18432

When I tested it, as I already wrote above, I successfully increased the RAM, checked that the engine VM received the additional RAM, and hence moved it to verified. 
Again, if you see that it's broken now, please open a new bug as a regression.

Comment 28 Nikolai Sednev 2019-03-26 13:30:08 UTC
Checked now on these components and memory hotplug isn't working properly:

I changed the RAM size in the UI from 16384 MB to 18432 MB and clicked the OK button. The engine reported that the change was a success, but the "Memory Size" field remained 16384 MB after the change and "Physical Memory Guaranteed" changed to 18432 MB. In the UI events I see these:
VM HostedEngine configuration was updated by admin@internal-authz.
3/26/19 3:14:46 PM

Hotset memory: changed the amount of memory on VM HostedEngine from 16384 to 16384
3/26/19 3:14:46 PM

Hotset memory: changed the amount of memory on VM HostedEngine from 16384 to 16384
3/26/19 3:14:08 PM

User admin@internal-authz connecting from '10.35.7.165' using session 'RVGHV1O5YVzF+FDvdOis6+5DAMHqpCvkzuyFzybfq6OooD6vzO4zbG3HavqDT5S9AsYTchvik2g+ll93I3Eojg==' logged in.
3/26/19 2:52:22 PM

In the vdsm log I see these:

2019-03-26 15:14:46,277+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (
__init__:312)
2019-03-26 15:14:46,281+0200 INFO  (jsonrpc/6) [api.host] START getCapabilities() from=::1,49084 (api:48)
2019-03-26 15:14:46,292+0200 INFO  (jsonrpc/2) [api.virt] FINISH hotplugMemory return={'status': {'message': 'Done', '
code': 0}, 'vmList': {'status': 'Up', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.6
.0', 'numOfIoThreads': '1', 'vmId': 'f8498652-628d-4e9e-9707-51f5e88dba8c', 'memGuaranteedSize': 18432, 'timeOffset': 
'0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge', 'guestDiskMapping': {u'b5df94ab-2502-4a95-b': {u'name': u'/de
v/vda'}, u'QEMU_DVD-ROM_QM00003': {u'name': u'/dev/sr0'}}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeInde
x': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml': u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovi
rt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><u
uid>f8498652-628d-4e9e-9707-51f5e88dba8c</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothre
ads>1</iothreads><maxMemory slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo type="smbios"><system><
entry name="manufacturer">Red Hat</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entr
y><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">f8498652-628d-4e9e-9707-51f5e88dba8c</entry></system></s
ysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="d
elay"/><timer name="hpet" present="no"/></clock><features><acpi/><vmcoreinfo/></features><cpu match="exact"><model>San
dyBridge</model><feature name="pcid" policy="require"/><feature name="spec-ctrl" policy="require"/><feature name="ssbd
" policy="require"/><topology cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3" memory="16777216"/
></numa></cpu><cputune/><devices><input type="mouse" bus="ps2"/><channel type="unix"><target type="virtio" name="ovirt
-guest-agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/f8498652-628d-4e9e-9707-51f5e88dba8c.ovirt-g
uest-agent.0"/></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind"
 path="/var/lib/libvirt/qemu/channels/f8498652-628d-4e9e-9707-51f5e88dba8c.org.qemu.guest_agent.0"/></channel><graphic
s type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="
network" network="vdsm-ovirtmgmt"/></graphics><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="
1"/><alias name="ua-1bd3e1e5-292d-429d-8d51-b04ccbf51189"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x
05" type="pci"/></controller><controller type="virtio-serial" index="0" ports="16"><alias name="ua-2b3a8145-d173-4154-
86c9-5e82ae0cbc6e"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/></controller><rng model
="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-2c6e557b-b89d-41c2-beb5-bacbd568f69b"/></rng><
memballoon model="virtio"><stats period="5"/><alias name="ua-5adc88a1-3e1f-4d25-87b2-900178404f94"/><address bus="0x00
" domain="0x0000" function="0x0" slot="0x08" type="pci"/></memballoon><controller type="usb" model="piix3-uhci" index=
"0"><address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/></controller><video><model type="qxl" v
ram="32768" heads="1" ram="65536" vgamem="16384"/><alias name="ua-7851ae82-9c12-4428-838a-81b59c17179c"/><address bus=
"0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/></video><graphics type="spice" port="-1" autoport="yes" 
passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"/><channel name="inp
uts" mode="secure"/><channel name="cursor" mode="secure"/><channel name="playback" mode="secure"/><channel name="recor
d" mode="secure"/><channel name="display" mode="secure"/><channel name="smartcard" mode="secure"/><channel name="usbre
dir" mode="secure"/><listen type="network" network="vdsm-ovirtmgmt"/></graphics><console type="unix"><source path="/va
r/run/ovirt-vmconsole-console/f8498652-628d-4e9e-9707-51f5e88dba8c.sock" mode="bind"/><target type="serial" port="0"/>
<alias name="ua-ce200b9b-afd5-44d0-b3e8-c391413909d2"/></console><sound model="ich6"><alias name="ua-dedd1ce8-4a46-43d
0-a606-7c7152066d34"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/></sound><controller t
ype="ide" index="0"><address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/></controller><serial ty
pe="unix"><source path="/var/run/ovirt-vmconsole-console/f8498652-628d-4e9e-9707-51f5e88dba8c.sock" mode="bind"/><targ
et port="0"/></serial><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"/></channel><interface t
ype="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><ali
as name="ua-0d0938c3-5666-4508-baa5-9d49c28e2cdc"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type
="pci"/><mac address="00:16:3e:7b:b8:53"/><mtu size="1500"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></in
terface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source f
ile="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"/></source><target dev="hdc" bus="ide"/>
<readonly/><alias name="ua-97681479-9dff-4a7a-9031-74dc4ea8a0b2"/><address bus="1" controller="0" unit="0" type="drive
" target="0"/></disk><disk snapshot="no" type="file" device="disk"><target dev="vda" bus="virtio"/><source file="/rhev
/data-center/00000000-0000-0000-0000-000000000000/ebcb128c-284d-4a0c-9e42-fb83170cbc4a/images/b5df94ab-2502-4a95-b5a2-
c4acd6b09d0b/15d1476d-b8d4-4cad-8b9b-cb615bc993d7"><seclabel model="dac" type="none" relabel="no"/></source><driver na
me="qemu" iothread="1" io="threads" type="raw" error_policy="stop" cache="none"/><alias name="ua-b5df94ab-2502-4a95-b5
a2-c4acd6b09d0b"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x07" type="pci"/><serial>b5df94ab-2502-4a9
5-b5a2-c4acd6b09d0b</serial></disk><lease><key>15d1476d-b8d4-4cad-8b9b-cb615bc993d7</key><lockspace>ebcb128c-284d-4a0c
-9e42-fb83170cbc4a</lockspace><target offset="LEASE-OFFSET:15d1476d-b8d4-4cad-8b9b-cb615bc993d7:ebcb128c-284d-4a0c-9e4
2-fb83170cbc4a" path="LEASE-PATH:15d1476d-b8d4-4cad-8b9b-cb615bc993d7:ebcb128c-284d-4a0c-9e42-fb83170cbc4a"/></lease><
/devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-i44
0fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><
ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:
clusterVersion><ovirt-vm:custom/><ovirt-vm:device mac_address="00:16:3e:7b:b8:53"><ovirt-vm:custom/></ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovi
rt-vm:volumeID>15d1476d-b8d4-4cad-8b9b-cb615bc993d7</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ov
irt-vm:imageID>b5df94ab-2502-4a95-b5a2-c4acd6b09d0b</ovirt-vm:imageID><ovirt-vm:domainID>ebcb128c-284d-4a0c-9e42-fb831
70cbc4a</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehav
ior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>', 'smpCoresPerSocket': '4', 'kvmEnable': '
true', 'bootMenuEnable': 'false', 'devices': [{'index': 2, 'iface': 'ide', 'name': 'hdc', 'vm_custom': {}, 'format': '
raw', 'vmid': 'f8498652-628d-4e9e-9707-51f5e88dba8c', 'diskType': 'file', 'alias': 'ua-97681479-9dff-4a7a-9031-74dc4ea
8a0b2', 'readonly': 'True', 'specParams': {}, 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0
', 'unit': '0'}, 'device': 'cdrom', 'discard': False, 'path': '', 'propagateErrors': 'report', 'type': 'disk'}, {'addr
ess': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'reqsize': '0', 'serial':
 'b5df94ab-2502-4a95-b5a2-c4acd6b09d0b', 'index': 0, 'iface': 'virtio', 'apparentsize': '62277025792', 'specParams': {
'pinToIoThread': '1'}, 'cache': 'none', 'imageID': 'b5df94ab-2502-4a95-b5a2-c4acd6b09d0b', 'readonly': 'False', 'share
d': 'exclusive', 'truesize': '4374106112', 'type': 'disk', 'domainID': 'ebcb128c-284d-4a0c-9e42-fb83170cbc4a', 'volume
Info': {'path': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_nsednev__he__1/ebcb128c-284d-4
a0c-9e42-fb83170cbc4a/images/b5df94ab-2502-4a95-b5a2-c4acd6b09d0b/15d1476d-b8d4-4cad-8b9b-cb615bc993d7', 'type': 'file
'}, 'format': 'raw', 'poolID': '00000000-0000-0000-0000-000000000000', 'device': 'disk', 'path': u'/var/run/vdsm/stora
ge/ebcb128c-284d-4a0c-9e42-fb83170cbc4a/b5df94ab-2502-4a95-b5a2-c4acd6b09d0b/15d1476d-b8d4-4cad-8b9b-cb615bc993d7', 'p
ropagateErrors': 'off', 'name': 'vda', 'vm_custom': {}, 'vmid': 'f8498652-628d-4e9e-9707-51f5e88dba8c', 'volumeID': '1
5d1476d-b8d4-4cad-8b9b-cb615bc993d7', 'diskType': 'file', 'alias': 'ua-b5df94ab-2502-4a95-b5a2-c4acd6b09d0b', 'discard
': False, 'volumeChain': [{'domainID': 'ebcb128c-284d-4a0c-9e42-fb83170cbc4a', 'leaseOffset': 0, 'volumeID': u'15d1476
d-b8d4-4cad-8b9b-cb615bc993d7', 'leasePath': u'/rhev/data-center/mnt/yellow-vdsb.qa.lab.tlv.redhat.com:_Compute__NFS_n
sednev__he__1/ebcb128c-284d-4a0c-9e42-fb83170cbc4a/images/b5df94ab-2502-4a95-b5a2-c4acd6b09d0b/15d1476d-b8d4-4cad-8b9b
-cb615bc993d7.lease', 'imageID': 'b5df94ab-2502-4a95-b5a2-c4acd6b09d0b', 'path': u'/rhev/data-center/mnt/yellow-vdsb.q
a.lab.tlv.redhat.com:_Compute__NFS_nsednev__he__1/ebcb128c-284d-4a0c-9e42-fb83170cbc4a/images/b5df94ab-2502-4a95-b5a2-
c4acd6b09d0b/15d1476d-b8d4-4cad-8b9b-cb615bc993d7'}]}, {'device': 'ich6', 'alias': 'ua-dedd1ce8-4a46-43d0-a606-7c71520
66d34', 'type': 'sound', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x0
4'}}, {'device': 'console', 'alias': 'ua-ce200b9b-afd5-44d0-b3e8-c391413909d2', 'type': 'console', 'specParams': {'con
soleType': 'serial', 'enableSocket': True}}, {'device': 'memballoon', 'alias': 'ua-5adc88a1-3e1f-4d25-87b2-900178404f9
4', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x08'}, 'type': 'balloon
', 'specParams': {'model': 'virtio'}}, {'device': 'vnc', 'specParams': {'fileTransferEnable': True, 'copyPasteEnable':
 True, 'keyMap': 'en-us', 'displayIp': '10.35.92.4', 'displayNetwork': 'ovirtmgmt'}, 'port': '-1', 'type': 'graphics'}
, {'tlsPort': '-1', 'specParams': {'fileTransferEnable': True, 'copyPasteEnable': True, 'displayIp': '10.35.92.4', 'di
splayNetwork': 'ovirtmgmt'}, 'device': 'spice', 'type': 'graphics', 'port': '-1'}, {'specParams': {'source': 'urandom'
}, 'alias': 'ua-2c6e557b-b89d-41c2-beb5-bacbd568f69b', 'device': 'virtio', 'model': 'virtio', 'type': 'rng'}, {'index'
: '0', 'specParams': {'ioThreadId': '1'}, 'alias': 'ua-1bd3e1e5-292d-429d-8d51-b04ccbf51189', 'address': {'function': 
'0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x05'}, 'device': 'scsi', 'model': 'virtio-scsi', 't
ype': 'controller'}, {'index': '0', 'alias': 'ua-2b3a8145-d173-4154-86c9-5e82ae0cbc6e', 'address': {'function': '0x0',
 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x06'}, 'device': 'virtio-serial', 'type': 'controller', '
ports': '16'}, {'device': 'usb', 'index': '0', 'model': 'piix3-uhci', 'type': 'controller', 'address': {'function': '0
x2', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x01'}}, {'device': 'ide', 'index': '0', 'type': 'cont
roller', 'address': {'function': '0x1', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x01'}}, {u'node': 
0, u'alias': u'ua-9b27862a-552c-4555-b220-e11bd3813579', u'specParams': {u'node': u'0', u'size': u'128'}, u'deviceId':
 u'9b27862a-552c-4555-b220-e11bd3813579', u'device': u'memory', u'type': u'memory', u'size': 128}, {u'node': 0, u'alia
s': u'ua-47e14252-539e-4e66-8137-756c3fbf2c1e', u'specParams': {u'node': u'0', u'size': u'1920'}, u'deviceId': u'47e14
252-539e-4e66-8137-756c3fbf2c1e', u'device': u'memory', u'type': u'memory', u'size': 1920}, {u'node': 0, u'alias': u'u
a-d42e42c5-218f-4fca-bb52-922ba1853583', u'specParams': {u'node': u'0', u'size': u'2048'}, u'deviceId': u'd42e42c5-218
f-4fca-bb52-922ba1853583', u'device': u'memory', u'type': u'memory', u'size': 2048}, {'nicModel': 'pv', 'macAddr': '00
:16:3e:7b:b8:53', 'linkActive': True, 'filterParameters': [], 'specParams': {}, 'custom': {'queues': '4'}, 'filter': '
vdsm-no-mac-spoofing', 'alias': 'ua-0d0938c3-5666-4508-baa5-9d49c28e2cdc', 'address': {'function': '0x0', 'bus': '0x00
', 'domain': '0x0000', 'type': 'pci', 'slot': '0x03'}, 'device': 'bridge', 'mtu': 1500, 'type': 'interface', 'network'
: 'ovirtmgmt'}, {'device': 'qxl', 'alias': 'ua-7851ae82-9c12-4428-838a-81b59c17179c', 'specParams': {'vram': '32768', 
'vgamem': '16384', 'heads': '1', 'ram': '65536'}, 'type': 'video', 'address': {'function': '0x0', 'bus': '0x00', 'doma
in': '0x0000', 'type': 'pci', 'slot': '0x02'}}, {'lease_id': '15d1476d-b8d4-4cad-8b9b-cb615bc993d7', 'sd_id': 'ebcb128
c-284d-4a0c-9e42-fb83170cbc4a', 'offset': 'LEASE-OFFSET:15d1476d-b8d4-4cad-8b9b-cb615bc993d7:ebcb128c-284d-4a0c-9e42-f
b83170cbc4a', 'device': 'lease', 'path': 'LEASE-PATH:15d1476d-b8d4-4cad-8b9b-cb615bc993d7:ebcb128c-284d-4a0c-9e42-fb83
170cbc4a', 'type': 'lease'}], 'custom': {}, 'maxVCpus': '64', 'statusTime': '4900469070', 'vmName': 'HostedEngine', 'm
axMemSlots': 16}} from=::ffff:10.35.92.51,48348, flow_id=10a11c69, vmId=f8498652-628d-4e9e-9707-51f5e88dba8c (api:54)
2019-03-26 15:14:46,300+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.hotplugMemory succeeded in 0.08 sec
onds (__init__:312)

Moving back to assigned.

Comment 29 Nikolai Sednev 2019-03-26 13:46:38 UTC
ovirt-hosted-engine-setup-2.3.6-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Linux 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Comment 30 Ryan Barry 2019-03-28 14:23:46 UTC
Nikolai/Olimp, can you please attach new logs?

We also have https://bugzilla.redhat.com/show_bug.cgi?id=1685569, where it appears that a balloon driver is not available on the hosted engine, so hotplugging fails; this may be the same issue.

Comment 31 Olimp Bockowski 2019-03-31 10:44:02 UTC
@Ryan - I am not sure it is the same; here it looks like the wrong property is changed, e.g. memGuaranteedSize instead of the memory one.

customer's
rhvm-4.2.8.2-0.1.el7ev.noarch
redhat-release-virtualization-host-4.2-8.0.el7.x86_64

IMHO the trap here is that a QA test can easily come out positive, because the error message is not triggered under some conditions:

libvirtError: unsupported configuration: Attaching memory device with size '131072' would exceed domain's maxMemory config

So let's say you have 16 GiB and max is usually 32 GiB: you increase to 17 GiB, and guaranteed is set to 17 GiB while mem still keeps the old value, but you won't see the error message, because max won't be exceeded.
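
A minimal sketch of this trap (my own illustration, not vdsm or libvirt code): libvirt only rejects a DIMM attach when memory plus the DIMM would exceed maxMemory, so a run where only memGuaranteedSize was bumped sails through until the ceiling is reached.

```python
# Illustrative only: mimics libvirt's maxMemory check for a DIMM hotplug.
GIB = 1024  # MiB per GiB

def hotplug_allowed(current_mib, dimm_mib, max_mib):
    """True if attaching the DIMM stays within the domain's maxMemory."""
    return current_mib + dimm_mib <= max_mib

# 16 GiB VM, 32 GiB maxMemory: a 1 GiB grow passes even though the engine
# mistakenly changed memGuaranteedSize instead of the memory size.
assert hotplug_allowed(16 * GIB, 1 * GIB, 32 * GIB)

# Only near the ceiling does the libvirt error from this BZ finally appear:
assert not hotplug_allowed(31 * GIB + 512, 1 * GIB, 32 * GIB)
```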

@Nikolai I believe you have logs, since you pasted the output? I have one file from a customer, so I am attaching it; the test env I used is gone, so I can't upload a file from there.

Comment 33 Olimp Bockowski 2019-03-31 11:24:33 UTC
@Ryan - I read the whole https://bugzilla.redhat.com/show_bug.cgi?id=1685569
I believe there are some red herrings. 
"Physical Memory Guaranteed" is the lowest value that memory ballooning will reach. This is done by the Memory Overcommit Manager and the ballooning driver. If ballooning is disabled, this value is not used at all."
so increasing guaranteed could hit some other problem. ANYWAY, it is possible to change the mem size, but manually, e.g. hosted-engine --vm-start --vm-conf=./vm.conf (modified), and in the other BZ they claimed it is not possible (I guess because they were looking into the UI, not inside the VM). I changed it for my customer, but in the UI it is not shown = it wasn't written to the config file on the SHE Storage Domain, so after a reboot we have the old value (of course I changed OvfUpdateIntervalInMinutes to 1 minute to make it persistent)

Comment 35 Daniel Gur 2019-08-28 13:14:18 UTC
sync2jira

Comment 36 Daniel Gur 2019-08-28 13:19:20 UTC
sync2jira

Comment 37 Dionysis Kladis 2019-10-03 13:29:33 UTC
I have the same problem as Nikolai Sednev: I cannot increase the memory of the hosted engine VM.

Here is the log from the engine. I can provide anything and am willing to do whatever it takes to assist you in cracking this bug!

2019-10-03 15:04:28,951+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] EVENT_ID: VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host Mars was guaranteed 8192 MB but currently has 5120 MB
2019-10-03 15:19:30,294+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [] EVENT_ID: VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host Mars was guaranteed 8192 MB but currently has 5120 MB
    <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>1286144</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
      <cell id='0' cpus='0' memory='1048576' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>8388608</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
      <cell id='0' cpus='0' memory='2097152' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
      <cell id='0' cpus='0' memory='4194304' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>8388608</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
      <cell id='0' cpus='0' memory='2097152' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>1286144</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
      <cell id='0' cpus='0' memory='1048576' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
      <cell id='0' cpus='0' memory='4194304' unit='KiB'/>
2019-10-03 15:29:57,829+03 INFO  [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-6) [1796392d] Running command: HotSetAmountOfMemoryCommand internal: true. Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2019-10-03 15:29:57,832+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-6) [1796392d] START, SetAmountOfMemoryVDSCommand(HostName = Mars, Params:{hostId='80de3bc7-25e3-4ccf-9c8c-dce45c2638e0', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='790aff82-ce57-4791-b666-dc5e282212db', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=128]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-790aff82-ce57-4791-b666-dc5e282212db', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='5461'}), log id: 2cae0b6
2019-10-03 15:29:57,882+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-6) [1796392d] FINISH, SetAmountOfMemoryVDSCommand, return: , log id: 2cae0b6
2019-10-03 15:29:58,254+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6) [1796392d] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 5120 to 5120
2019-10-03 15:29:58,289+03 INFO  [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-6) [2737052] Running command: HotSetAmountOfMemoryCommand internal: true. Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2019-10-03 15:29:58,290+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-6) [2737052] START, SetAmountOfMemoryVDSCommand(HostName = Mars, Params:{hostId='80de3bc7-25e3-4ccf-9c8c-dce45c2638e0', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='cf4da387-4fd1-43f0-8796-d30b9ae59c3b', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=2944]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-cf4da387-4fd1-43f0-8796-d30b9ae59c3b', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='5461'}), log id: 4f918487
2019-10-03 15:29:58,339+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-6) [2737052] FINISH, SetAmountOfMemoryVDSCommand, return: , log id: 4f918487
2019-10-03 15:29:58,366+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6) [2737052] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 5120 to 5120
    <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>13107200</maxMemory>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2560</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>10485760</maxMemory>
  <memory unit='KiB'>2621440</memory>
  <currentMemory unit='KiB'>2621440</currentMemory>
      <cell id='0' cpus='0-2' memory='2621440' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2560</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>10485760</maxMemory>
  <memory unit='KiB'>2621440</memory>
  <currentMemory unit='KiB'>2621440</currentMemory>
      <cell id='0' cpus='0-2' memory='2621440' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2512</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>6193152</maxMemory>
  <memory unit='KiB'>2572288</memory>
  <currentMemory unit='KiB'>2572288</currentMemory>
      <cell id='0' cpus='0-1' memory='2572288' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2512</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>6193152</maxMemory>
  <memory unit='KiB'>2572288</memory>
  <currentMemory unit='KiB'>2572288</currentMemory>
      <cell id='0' cpus='0-1' memory='2572288' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>13107200</maxMemory>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>13107200</maxMemory>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>8388608</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
      <cell id='0' cpus='0' memory='2097152' unit='KiB'/>
    <ovirt-vm:minGuaranteedMemoryMb type="int">2048</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>8388608</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
      <cell id='0' cpus='0' memory='2097152' unit='KiB'/>
2019-10-03 16:10:36,888+03 INFO  [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-17) [4a6922aa] Running command: HotSetAmountOfMemoryCommand internal: true. Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2019-10-03 16:10:36,889+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-17) [4a6922aa] START, SetAmountOfMemoryVDSCommand(HostName = Deimos, Params:{hostId='0fafebe7-14e8-4e4f-916c-d56b7b5150f8', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='c0688892-92d8-49ce-bf31-b6631054aa6f', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-c0688892-92d8-49ce-bf31-b6631054aa6f', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='5461'}), log id: 7441f9dd
2019-10-03 16:10:36,934+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-17) [4a6922aa] FINISH, SetAmountOfMemoryVDSCommand, return: , log id: 7441f9dd
2019-10-03 16:10:36,944+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-17) [4a6922aa] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 5120 to 5120
    <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>13107200</maxMemory>
  <memory unit='KiB'>11534336</memory>
  <currentMemory unit='KiB'>11534336</currentMemory>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <ovirt-vm:minGuaranteedMemoryMb type="int">8192</ovirt-vm:minGuaranteedMemoryMb>
  <maxMemory slots='16' unit='KiB'>13107200</maxMemory>
  <memory unit='KiB'>11534336</memory>
  <currentMemory unit='KiB'>11534336</currentMemory>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
    <memory model='dimm'>
    </memory>
2019-10-03 16:22:39,127+03 INFO  [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-19) [6947fc47] Running command: HotSetAmountOfMemoryCommand internal: true. Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2019-10-03 16:22:39,128+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-19) [6947fc47] START, SetAmountOfMemoryVDSCommand(HostName = Deimos, Params:{hostId='0fafebe7-14e8-4e4f-916c-d56b7b5150f8', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='4ccd8407-e15f-4319-8eb7-cb56012bc5ab', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-4ccd8407-e15f-4319-8eb7-cb56012bc5ab', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'}), log id: 54b3104e
2019-10-03 16:22:39,136+03 ERROR [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-19) [6947fc47] Failed in 'SetAmountOfMemoryVDS' method
2019-10-03 16:22:39,141+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-19) [6947fc47] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM Deimos command SetAmountOfMemoryVDS failed: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config
2019-10-03 16:22:39,141+03 ERROR [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-19) [6947fc47] Command 'SetAmountOfMemoryVDSCommand(HostName = Deimos, Params:{hostId='0fafebe7-14e8-4e4f-916c-d56b7b5150f8', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='4ccd8407-e15f-4319-8eb7-cb56012bc5ab', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-4ccd8407-e15f-4319-8eb7-cb56012bc5ab', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'})' execution failed: VDSGenericException: VDSErrorException: Failed to SetAmountOfMemoryVDS, error = unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config, code = 70
2019-10-03 16:22:39,141+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-19) [6947fc47] FINISH, SetAmountOfMemoryVDSCommand, return: , log id: 54b3104e
2019-10-03 16:22:39,149+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-19) [6947fc47] EVENT_ID: FAILED_HOT_SET_MEMORY(2,040), Failed to hot set memory to VM HostedEngine. Underlying error message: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config
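
The numbers in the libvirt error check out against the domain XML fragments pasted in this comment (a quick sanity check of my own, not engine code):

```python
# Sanity check of the libvirt error above, using values from the domain XML
# pasted in this comment (all in KiB, as libvirt reports them).
KIB_PER_MIB = 1024

current_kib = 11534336            # <currentMemory unit='KiB'>
max_kib = 13107200                # <maxMemory slots='16' unit='KiB'>
dimm_kib = 3072 * KIB_PER_MIB     # the size=3072 MiB DIMM from the command

assert dimm_kib == 3145728        # the size quoted in the libvirt error
# 11534336 + 3145728 = 14680064 > 13107200, hence the attach is rejected:
assert current_kib + dimm_kib > max_kib
```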

Comment 38 Dionysis Kladis 2019-10-22 21:33:26 UTC
With memory hotplug enabled in the oVirt hosted engine configuration, I tested again on the latest updated hosted engine from yum list: ovirt-engine.noarch 4.3.6.7-1.el7

and the vdsm log is 

tail -f /var/log/ovirt-engine/engine.log |grep memory
2019-10-23 00:27:04,957+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [8c80ea10-a46c-4385-adf7-c512d89b1626] EVENT_ID: FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM HostedEngine. Amount of added memory (341MiB) is not dividable by 256MiB.
2019-10-23 00:28:01,233+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-2) [14f2559c] START, SetAmountOfMemoryVDSCommand(HostName = Phobos, Params:{hostId='7e5f4fb1-285c-4767-8a34-c0190472eeb5', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='2ad5b156-bfcc-40af-8fb8-95a53cf8a8df', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-2ad5b156-bfcc-40af-8fb8-95a53cf8a8df', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'}), log id: 7ea7f25
2019-10-23 00:28:01,246+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2) [14f2559c] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM Phobos command SetAmountOfMemoryVDS failed: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config
2019-10-23 00:28:01,246+03 ERROR [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-2) [14f2559c] Command 'SetAmountOfMemoryVDSCommand(HostName = Phobos, Params:{hostId='7e5f4fb1-285c-4767-8a34-c0190472eeb5', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='2ad5b156-bfcc-40af-8fb8-95a53cf8a8df', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-2ad5b156-bfcc-40af-8fb8-95a53cf8a8df', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'})' execution failed: VDSGenericException: VDSErrorException: Failed to SetAmountOfMemoryVDS, error = unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config, code = 70
2019-10-23 00:28:01,256+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2) [14f2559c] EVENT_ID: FAILED_HOT_SET_MEMORY(2,040), Failed to hot set memory to VM HostedEngine. Underlying error message: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config


and the engine log is 

tail -f /var/log/ovirt-engine/engine.log |grep memory
2019-10-23 00:31:25,315+03 INFO  [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-4) [3a0591d5] START, SetAmountOfMemoryVDSCommand(HostName = Phobos, Params:{hostId='7e5f4fb1-285c-4767-8a34-c0190472eeb5', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='98abbe0a-1773-48d6-b45a-7842818991d4', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-98abbe0a-1773-48d6-b45a-7842818991d4', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'}), log id: 10879343
2019-10-23 00:31:25,328+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [3a0591d5] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM Phobos command SetAmountOfMemoryVDS failed: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config
2019-10-23 00:31:25,328+03 ERROR [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-4) [3a0591d5] Command 'SetAmountOfMemoryVDSCommand(HostName = Phobos, Params:{hostId='7e5f4fb1-285c-4767-8a34-c0190472eeb5', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='98abbe0a-1773-48d6-b45a-7842818991d4', vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory', type='MEMORY', specParams='[node=0, size=3072]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='ua-98abbe0a-1773-48d6-b45a-7842818991d4', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='8192'})' execution failed: VDSGenericException: VDSErrorException: Failed to SetAmountOfMemoryVDS, error = unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config, code = 70
2019-10-23 00:31:25,340+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [3a0591d5] EVENT_ID: FAILED_HOT_SET_MEMORY(2,040), Failed to hot set memory to VM HostedEngine. Underlying error message: unsupported configuration: Attaching memory device with size '3145728' would exceed domain's maxMemory config
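
The two failures in these logs are distinct checks. A hedged sketch (my own, not the engine's actual code) of the engine-side rule behind FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE: the hotplugged delta must be a multiple of 256 MiB, and passing that check still leaves libvirt's maxMemory check, which is what rejects the 3072 MiB DIMM here.

```python
# Illustrative engine-side validation: hotplug deltas are allocated as DIMMs
# in 256 MiB multiples (the block size implied by the log message above).
BLOCK_MIB = 256

def delta_is_pluggable(delta_mib):
    return delta_mib > 0 and delta_mib % BLOCK_MIB == 0

assert not delta_is_pluggable(341)   # -> FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE
assert delta_is_pluggable(3072)      # passes here, but still fails later at
                                     # libvirt's maxMemory check in this log
```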

Comment 39 Marina Kalinin 2019-11-11 20:05:27 UTC
So, what is the workaround for this BZ for someone who messed up their deployment, didn't give enough memory to the HE VM, and is now experiencing slowdowns? Starting the VM from a config file?
I think the severity of this BZ should be higher if today a user cannot add memory to the HE VM via the Admin Portal.

https://access.redhat.com/solutions/2209751

Comment 40 Ryan Barry 2019-11-11 21:45:33 UTC
Reducing the severity because it's a 2 year old bug with a documented workaround.

Sure, it's ugly, and it may just be a wrong property (needs investigation), but https://bugzilla.redhat.com/show_bug.cgi?id=1685569 also needs a look, since there may be an issue with the balloon driver on the HE VM.

Comment 41 Marina Kalinin 2019-11-11 22:39:34 UTC
Should this BZ depend on the upstream bz 1685569 then?

Comment 42 Ryan Barry 2019-11-11 22:40:58 UTC
No, or not yet, until it's investigated

Comment 43 Germano Veit Michel 2019-11-25 02:09:13 UTC
Same as comment #28, which I can also reproduce on 4.3.6. Only min_allocated_mem is being raised; the OVFs still have the old value too.

Comment 44 Clint Goudie 2019-12-08 18:47:18 UTC
Same as comment 43. I'm completely unable to change the minimum memory through the UI, or through running a custom VM config as RH support articles suggest. The OVF has the old value.

Comment 45 RHV bug bot 2020-01-08 14:47:03 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 46 RHV bug bot 2020-01-08 15:15:38 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 47 RHV bug bot 2020-01-24 19:48:46 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 54 Ryan Barry 2020-03-19 12:32:11 UTC
*** Bug 1685569 has been marked as a duplicate of this bug. ***

Comment 55 Nikolai Sednev 2020-04-01 15:11:03 UTC
I've changed the memory via the UI from 16G to 32G and got confirmation in vdsm.log:

2020-04-01 18:00:38,151+0300 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.hotplugMemory succeeded in 0.05 seconds (__init__:312)
2020-04-01 18:00:38,191+0300 INFO  (jsonrpc/7) [api.virt] START hotplugMemory(params={'memGuaranteedSize': 32768, 'memory': {'node': 0, 'size': 16256, 'alias': 'ua-ac197fce-f62a-4e66-929d-0459d003e91f', 'type': 'memory', 'specParams': {'node': '0', 'size': '16256'}, 'device': 'memory', 'deviceId': 'ac197fce-f62a-4e66-929d-0459d003e91f'}, 'vmId': '9e2e81a5-64be-444e-b1c5-d9463f3eee35'}) from=::ffff:10.35.92.52,56710, flow_id=74130de, vmId=9e2e81a5-64be-444e-b1c5-d9463f3eee35 (api:48)
2020-04-01 18:00:38,296+0300 INFO  (jsonrpc/7) [api.virt] FINISH hotplugMemory return={'status': {'code': 0, 'message': 'Done'}, 'vmList': {}} from=::ffff:10.35.92.52,56710, flow_id=74130de, vmId=9e2e81a5-64be-444e-b1c5-d9463f3eee35 (api:54)
2020-04-01 18:00:38,297+0300 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotplugMemory succeeded in 0.10 seconds (__init__:312)

On the engine, a memory check showed:
nsednev-he-2 ~]# lsmem 
Memory block size:       128M
Total online memory:      32G
Total offline memory:      0B

In the UI I see that the memory size was properly updated: the Memory Size field is 32768 MB, and Physical Memory Guaranteed is 32768 MB.

Works as expected.

Deployment of HE 4.4 on NFS.

Tested on host with these components:
rhvm-appliance.x86_64 2:4.4-20200326.0.el8ev
ovirt-hosted-engine-setup-2.4.4-1.el8ev.noarch
ovirt-hosted-engine-ha-2.4.2-1.el8ev.noarch
Red Hat Enterprise Linux release 8.2 Beta (Ootpa)
Linux 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Engine:
ovirt-engine-setup-base-4.4.0-0.26.master.el8ev.noarch
ovirt-engine-4.4.0-0.26.master.el8ev.noarch
openvswitch2.11-2.11.0-48.el8fdp.x86_64
Linux 4.18.0-192.el8.x86_64 #1 SMP Tue Mar 24 14:06:40 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.2 Beta (Ootpa)

Comment 60 errata-xmlrpc 2020-08-04 13:16:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

