Bug 1779983 - After memory hot plug, why is the VM showing the "server with the newer configuration for next run" icon?
Summary: After memory hot plug, why is the VM showing the "server with the newer configuration for next run" icon?
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.4.7
Target Release: ---
Assignee: Milan Zamazal
QA Contact: Guilherme Santos
URL:
Whiteboard:
Duplicates: 1944641
Depends On:
Blocks:
 
Reported: 2019-12-05 07:30 UTC by Kumar Mashalkar
Modified: 2023-09-07 21:10 UTC
CC List: 6 users

Fixed In Version: ovirt-engine-4.4.7.4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-14 13:08:54 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
isaranov: testing_plan_complete+




Links:
- Red Hat Knowledge Base (Solution) 4764461 - last updated 2020-01-23 02:49:46 UTC
- oVirt gerrit 115146 (master, MERGED): core: Don't show next run memory configuration when unnecessary - last updated 2021-06-16 06:59:40 UTC

Description Kumar Mashalkar 2019-12-05 07:30:26 UTC
Description of problem:
After memory was hot-plugged into the VM, RHV-M shows the warning "server with the newer configuration for next run". This falsely suggests to the user that the memory change requires a VM reboot.


Version-Release number of selected component (if applicable):
4.3


How reproducible:
100%


Steps to Reproduce:
1. Edit the VM and increase its memory (within the max memory limit).
2. Save; this hot plugs the memory into the running VM (a scripted equivalent is sketched after these steps).
3. The UI keeps showing the message "server with the newer configuration for next run". This gives users the impression that the VM needs to be rebooted to apply the change, since it says "next run".
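
For reference, a minimal sketch of driving the same memory update through the oVirt Python SDK (ovirt-engine-sdk4); the engine URL, credentials, CA file, and VM name below are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine; all connection details are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]  # placeholder VM name
    vm_service = vms_service.vm_service(vm.id)
    # Memory is given in bytes; increasing it on a running VM (within
    # the max memory limit) triggers the hot plug described above.
    vm_service.update(types.Vm(memory=4 * 1024 ** 3))
finally:
    connection.close()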


Actual results:
Shows warning: "server with the newer configuration for next run"

Expected results:
It should not show any message.


Additional info:
If this message is meant to indicate that on the next run the existing memory and the hot-plugged memory will be merged, then a proper message should say so. Otherwise the user is left confused, thinking the memory does not increase on the fly and requires a restart.

Comment 1 Ryan Barry 2019-12-08 15:53:08 UTC
Hey Kumar -

This is expected behavior. The memory is increased appropriately, but we want to make users aware that the configuration has in fact changed. This message is generic and applies to all configuration changes. We'll see if we can make it more granular, but it's relatively low priority considering that the functionality still works.

Comment 2 Kumar Mashalkar 2019-12-09 02:29:09 UTC
Hello Ryan,

Thank you for the confirmation. Yes, the memory increase functionality does work as expected. The only confusion is that the message's "for next run" wording gives the impression that the VM requires a reboot, as it does for changes to max memory or guaranteed memory.

Since we do not get any icon or warning message when we attach a NIC or an additional disk to the VM, we should not show that icon or warning message when the defined memory size is changed while the VM is running. Otherwise, an appropriate message should be displayed for a better user experience.

Yes, this can be a lower priority as the functionality is working.

Thank you for your quick response to this.

Comment 3 Ryan Barry 2019-12-09 03:54:13 UTC
It's still not a working day in the US, but I'll check the logs in the morning.

Comment 4 Marina Kalinin 2020-01-16 21:59:27 UTC
Ryan, I agree with Kumar here. Let's review this and remove the message, if possible.

Comment 5 Marina Kalinin 2020-01-17 13:55:35 UTC
Hi Kumar,

The issue here is that memory hot plug depends on the balloon driver installed in the guest. If the driver is not installed, the engine would not know whether the hot plug worked or a reboot is required. That's why the message is there.

We are leaving the BZ open for now to see if we can change the message in the future release. But for now we should have this behavior explained in a KCS. Can you please take care of that?

Comment 7 Ryan Barry 2020-05-28 11:50:48 UTC
This would require a large matrix of possible messages, and likely isn't a good use of engineering time for such a small enhancement to UX.

Comment 9 Arik 2021-04-22 12:55:22 UTC
*** Bug 1944641 has been marked as a duplicate of this bug. ***

Comment 10 Milan Zamazal 2021-05-19 17:01:33 UTC
Let's clarify what's needed exactly. I think we can have the following basic situations:

- Memory is successfully hot plugged as requested. No next run configuration should be signaled.

- Memory is hot plugged, but the guest OS hasn't processed the inserted RAM correctly and it's not available to it. I'm not sure whether we can detect this situation properly/easily; perhaps it can be (mostly correctly) considered a pending hot plug and, again for simplicity, no next run configuration is signaled?

- Memory hot plug fails in the sense that the corresponding DIMM is not inserted into the VM at all. Should we signal a next run configuration in this case?

- The requested memory is set to a value not aligned with the DIMM size, the hot plug is successful, but the amount of currently available memory doesn't match the requested value exactly due to the misalignment. What should happen in this case? (See the sketch after this list.)
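
To make the misalignment case concrete, here is a toy sketch; it assumes memory is plugged in whole DIMM blocks of a fixed size (256 MiB here, purely for illustration) and rounds down, which may not match the actual engine behavior:

# Toy illustration of the misalignment case above: only whole DIMM
# blocks can be inserted, so a request that is not a multiple of the
# block size cannot be satisfied exactly. The block size and the
# round-down behavior are assumptions for illustration only.
DIMM_BLOCK_MB = 256

def plugged_memory_mb(current_mb, requested_mb):
    delta_mb = requested_mb - current_mb
    whole_dimms = delta_mb // DIMM_BLOCK_MB
    return current_mb + whole_dimms * DIMM_BLOCK_MB

# Requesting 1400 MB on a 1024 MB VM yields 1280 MB, not 1400 MB.
print(plugged_memory_mb(1024, 1400))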

As for the implementation, in case we pretend there is no next run configuration, I guess we may still need to keep it internally in order not to confuse snapshots (like seeing the new amount of memory + DIMM devices)?

Comment 11 Arik 2021-05-23 19:46:24 UTC
(In reply to Milan Zamazal from comment #10)
> Let's clarify what's needed exactly. I think we can have the following basic
> situations:
> 
> - Memory is successfully hot plugged as requested. No next run configuration
> should be signaled.

Yep

> 
> - Memory is hot plugged, but the guest OS hasn't processed the inserted RAM
> correctly and it's not available to it. I'm not sure whether we can detect
> this situation properly/easily; perhaps it can be (mostly correctly)
> considered a pending hot plug and, again for simplicity, no next run
> configuration is signaled?

I'd say we don't want to have a next-run configuration in that case.
There was an argument earlier in this thread that we add the next-run configuration in order to show a message, since the balloon driver might be missing, but we generally don't monitor whether the guest manages to consume the plugged devices (NICs, disks, and probably also vCPUs) - for example, we can successfully plug a disk into a guest with no operating system at all.

> 
> - Memory hot plug fails in the sense that the corresponding DIMM is not
> inserted into the VM at all. Should we signal a next run configuration in
> this case?

If the user asks to hot-plug a device and that fails, I don't think we should add the device to the next-run configuration as a fallback.
In this case, I'd say we should rather fail the update.

> 
> - The requested memory is set to a value not aligned with the DIMM size,
> the hot plug is successful, but the amount of currently available memory
> doesn't match the requested value exactly due to the misalignment. What
> should happen in this case?

I think it is similar to the case where the memory is hot-plugged but is not available to the guest - and we generally don't check what happens within the guest.
If we can check "immediately" that the memory is plugged successfully into the guest, it would be ideal to do that before reporting that the hot-plug succeeded.
But if we can't do that, maybe we can alternatively check the stats and compare the reported memory against the expected memory - and in case of a mismatch show a warning.
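
A hypothetical sketch of that fallback check; the function name, return values, and the missing-stats handling are illustrative, not the actual engine code:

from typing import Optional

# Hypothetical fallback check described above: compare the memory the
# guest stats report with the requested amount and warn on a mismatch,
# instead of keeping a next-run configuration.
def check_hotplug(requested_mb: int, reported_mb: Optional[int]) -> str:
    if reported_mb is None:
        # No guest stats available (e.g. no guest agent); skip the check.
        return 'unknown'
    if reported_mb < requested_mb:
        return 'warning: guest reports less memory than requested'
    return 'ok'

# Example: 1 GiB VM hot-plugged to 1.25 GiB, guest still reports 1 GiB.
print(check_hotplug(1280, 1024))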

> 
> As for the implementation, in case we pretend there is no next run
> configuration, I guess we may still need to keep it internally in order not
> to confuse snapshots (like seeing the new amount of memory + DIMM devices)?

I don't see a reason to have a next-run configuration. Let's think of a concrete case: a VM that was previously started with 1G of RAM and then had 256M of RAM hot-plugged to it is started again:
1. The domain XML will be set with the new amount of memory, which in this case is 1G + 256M.
2. The engine doesn't write the DIMM devices on run-VM, so the DIMM device that was plugged would not appear in the domain XML.
3. When the engine gets the devices from VDSM, the existing unmanaged devices, including the DIMM device that remained in the database from the previous run, would be dropped.
It should be the same when that VM is restored from a snapshot (a toy sketch of this device sync follows).
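
A toy model of the device sync in point 3; the data layout is an assumption made for illustration, not the engine's actual schema:

# Toy model of the device sync: unmanaged devices that VDSM no longer
# reports (such as a DIMM left over from the previous run) are dropped
# from the database, so no stale next-run state is needed.
db_devices = [
    {'id': 'nic0', 'managed': True},
    {'id': 'dimm0', 'managed': False},  # leftover from the previous run
]
vdsm_reported_ids = {'nic0'}  # the DIMM is absent from the new domain XML

synced = [dev for dev in db_devices
          if dev['managed'] or dev['id'] in vdsm_reported_ids]
print(synced)  # the stale DIMM has been dropped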

Comment 12 Arik 2021-05-23 19:49:12 UTC
> If we can check "immediately" that the memory is plugged successfully into
> the guest, it would be ideal to do that before reporting that the hot-plug
> succeeded.

But we should also not require having a guest agent.

Comment 17 Lukas Svaty 2021-07-14 13:08:54 UTC
This bug has low overall severity and passed an automated regression 
suite, and is not going to be further verified by QE. If you believe 
special care is required, feel free to re-open to ON_QA status.

