New commit detected on ManageIQ/manageiq/euwe:
https://github.com/ManageIQ/manageiq/commit/06f9617f4e61a231d2ce1a003b203ef3391739d6

commit 06f9617f4e61a231d2ce1a003b203ef3391739d6
Author:     Adam Grare <agrare>
AuthorDate: Tue Nov 22 07:45:30 2016 -0500
Commit:     Satoe Imaishi <simaishi>
CommitDate: Tue Jan 10 12:09:38 2017 -0500

    Merge pull request #12550 from jhernand/adjust_memory_to_satisfy_ovirt_constraints

    Adjust memory reconfiguration for oVirt
    (cherry picked from commit 4994ec87256403c1a21d924abc4b8bd1cb406360)

    https://bugzilla.redhat.com/show_bug.cgi?id=1404316

 .../manageiq/providers/redhat/infra_manager.rb | 107 +++++++++++++++++++--
 .../providers/redhat/infra_manager_spec.rb     |  65 +++++++++++--
 2 files changed, 157 insertions(+), 15 deletions(-)
Tested on CFME-5.7.1.

For a running VM with sockets=1, cores per socket=1, memory=100M, and guaranteed memory=100M, VM reconfigure: CPU sockets 1->4 and memory 100M->256M.

For RHV-4.0.5:
=============
CPU sockets is updated to 4.
Memory is updated to 356M.

For RHV-3.6.10:
==============
CPU sockets is updated to 4.
Memory is updated to 100M.

On the CFME side the reconfigure request completes successfully in both cases. It seems that for RHV-3.6 the memory update is not applied, even though the VM reconfiguration request ends successfully on the CFME side. Is this going to be fixed for RHV-3.6?
The fix should work exactly the same for 3.6 and 4.0. Are you completely sure that it didn't work for 3.6?
Ilanit is right: in version 3.6, changing the memory and the CPU simultaneously results in the memory not being changed at all. This seems to be related to the fact that CFME sends two requests to RHV: one to change the memory and another to change the CPUs. The second one resets the first. There is a hack that was introduced in the 'ovirt' gem to protect against a bug in RHV:

https://github.com/ManageIQ/ovirt/blob/master/lib/ovirt/vm.rb#L100

I am currently checking whether this is a real bug in RHV. Anyhow, to address this we will either need to remove that hack or reverse the order in which we do the updates (first CPU, then memory).

Ilanit, thanks for your help. I think you can mark this as FailedQA and move it back to ASSIGNED.
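The reset described above can be sketched as follows. This is a simplified illustration with hypothetical helper names, not the actual ovirt gem code: because the CPU update also carries a memory value (the hack mentioned above), it silently clobbers a memory change submitted just before it.

```ruby
# Sketch of the problematic update sequence (hypothetical helpers, not the
# real ovirt gem API). The gem's CPU update also sends the VM's memory as a
# workaround for an old RHV bug, so it can clobber a pending memory change.

def memory_update_payload(new_memory)
  { memory: new_memory }
end

def cpu_update_payload(sockets, current_memory)
  # The hack: memory is included even though only the CPU is being changed.
  { cpu: { topology: { sockets: sockets } }, memory: current_memory }
end

vm_state = { memory: 100, cpu: { topology: { sockets: 1 } } }

# Request 1: change memory 100M -> 256M.
vm_state.merge!(memory_update_payload(256))

# Request 2: change CPU sockets 1 -> 4, sending the *stale* memory value.
vm_state.merge!(cpu_update_payload(4, 100))

puts vm_state[:memory]  # => 100 : the memory change from request 1 is gone
```

Reversing the two requests (CPU first, then memory) would avoid the clobbering, which is one of the two options mentioned above.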
After studying this in depth I detected several issues:

1. Version 3.6 of RHV doesn't handle the 'next_run=false' parameter correctly: it ignores the 'false' value, and the mere presence of 'next_run=...' makes it treat the value as 'true'. As a result, when we send the request to update the current configuration we are actually updating (again) the next-run configuration.

2. Versions 3.6 and 4.0 of RHV return the value of the 'vm.memory' attribute incorrectly when 'next_run' is 'true': instead of returning the next-run memory they always return the current-run memory. I opened bug 1417201 to track that.

3. CFME sends the memory value with the requests to change the CPU (and probably other things), in theory to avoid a RHV bug that resets memory to 10 GiB when it isn't set. I haven't been able to reproduce that bug, at least not with RHV 3.6 and RHV 4.0.

Issue number 1 won't be fixed, as 3.6 has been superseded by 4.0, where the issue doesn't exist. Issue number 2 should be fixed in RHV. Issue number 3 should be fixed in CFME, but to do so we need to verify that the bug it tries to avoid doesn't really exist in the supported versions of RHV (3.6, 3.5, others?).
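Issue number 1 can be illustrated with a small sketch. This is not the actual RHV server code, just a model of the two observed behaviors: 3.6 keys on the presence of the parameter, while 4.0 actually parses its value.

```ruby
# Illustration (not actual RHV code) of issue 1: RHV 3.6 treats the mere
# presence of the 'next_run' parameter as true, ignoring its value, while
# RHV 4.0 honors the value.

def next_run_3_6?(params)
  params.key?('next_run')          # presence alone is taken to mean true
end

def next_run_4_0?(params)
  params['next_run'] == 'true'     # the value is actually parsed
end

params = { 'next_run' => 'false' } # what CFME sends to update the current config

puts next_run_3_6?(params)  # true : the update lands on the next-run config
puts next_run_4_0?(params)  # false: behaves as intended
```

This is why, on 3.6, the request meant to update the current configuration ends up updating the next-run configuration again.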
To fix this issue in CFME without waiting for the fixes in RHV, we can use the following pull request:

Don't send 'next_run=false' to oVirt 3.6
https://github.com/ManageIQ/manageiq/pull/13677

Satoe, how should we handle this? Change the bug back to ON_QA? Open a new bug?
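The idea behind that pull request can be sketched as follows (simplified, not the exact patch): only include the 'next_run' parameter when it is true, so RHV 3.6 never sees a 'next_run=false' that it would misinterpret as true.

```ruby
# Sketch of the workaround in PR #13677 (simplified; not the exact patch):
# omit the 'next_run' parameter entirely when it is false, instead of
# sending 'next_run=false', which RHV 3.6 misreads as true.

def update_params(next_run)
  params = {}
  params['next_run'] = 'true' if next_run  # omit the parameter when false
  params
end

puts update_params(true).inspect   # {"next_run"=>"true"}
puts update_params(false).inspect  # {} : no parameter, so 3.6 updates the current config
```

Since 4.0 treats a missing 'next_run' the same as 'next_run=false', this works on both versions.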
New commit detected on ManageIQ/manageiq/euwe:
https://github.com/ManageIQ/manageiq/commit/a44104e64ce778dc3192c904e2ca69c5509f1c8f

commit a44104e64ce778dc3192c904e2ca69c5509f1c8f
Author:     Adam Grare <agrare>
AuthorDate: Thu Feb 2 08:22:05 2017 -0500
Commit:     Satoe Imaishi <simaishi>
CommitDate: Thu Feb 2 09:32:39 2017 -0500

    Merge pull request #13677 from jhernand/do_not_send_next_run_false_to_ovirt

    Don't send 'next_run=false' to oVirt 3.6
    (cherry picked from commit 7194f99b79815413e6cf75d42b6e078f678ff55f)

    https://bugzilla.redhat.com/show_bug.cgi?id=1404316

 app/models/manageiq/providers/redhat/infra_manager.rb       | 2 +-
 spec/models/manageiq/providers/redhat/infra_manager_spec.rb | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)
Verified on CFME-5.7.1.1.

For a running VM with sockets=1, cores per socket=1, memory=100M, and guaranteed memory=100M, VM reconfigure: CPU sockets 1->4 and memory 100M->256M.

For RHV-4.0.5:
=============
CPU sockets is updated to 4.
Memory is updated to 356M.

For RHV-3.6.8:
==============
CPU sockets is updated to 4.
Memory is updated to 356M.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0320.html