Bug 1674352 - [RFE] Block changing the Initial Run configuration in a VM-Pool
Summary: [RFE] Block changing the Initial Run configuration in a VM-Pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.8
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.3.5
Target Release: 4.3.5
Assignee: Steven Rosenberg
QA Contact: Polina
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-11 05:48 UTC by nijin ashok
Modified: 2020-08-03 15:29 UTC (History)
10 users

Fixed In Version: ovirt-engine-4.3.5.2
Doc Type: Bug Fix
Doc Text:
Previously, a user could edit the initial run data of an individual Virtual Machine created by a pool. This produced inconsistent results, allowing Virtual Machines within the same pool to carry different initial run values even though they were all created from the pool's configuration. In this release, the user cannot modify an individual Virtual Machine's initial run data while the Virtual Machine is part of a pool.
Clone Of:
Environment:
Last Closed: 2019-08-12 11:53:27 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4098861 0 Configure None Why initial Run configuration of a VM Pool is getting changed after editing a pool VM 2019-05-01 15:42:44 UTC
Red Hat Product Errata RHEA-2019:2431 0 None None None 2019-08-12 11:53:39 UTC
oVirt gerrit 99594 0 'None' MERGED engine: Prevent updating of init run pool VMs 2020-11-16 16:18:00 UTC
oVirt gerrit 100909 0 'None' ABANDONED engine: Prevent updating of init run pool VMs 2020-11-16 16:18:00 UTC
oVirt gerrit 100932 0 'None' MERGED engine: Prevent updating of init run pool VMs 2020-11-16 16:18:00 UTC

Description nijin ashok 2019-02-11 05:48:36 UTC
Description of problem:

Created a pool of 5 VMs with the initial run configuration below.

====
/ovirt-engine/api/vmpools/ |grep -A8 "<initialization>"

            <initialization>
                <authorized_ssh_keys></authorized_ssh_keys>
                <custom_script></custom_script>
                <host_name>test.com</host_name>
                <nic_configurations/>
                <regenerate_ssh_keys>false</regenerate_ssh_keys>
                <timezone>Pacific/Honolulu</timezone>
                <user_name></user_name>
            </initialization>
====

Edited the initial run configuration of the VM test_pool-2: changed the host_name to test2.com and also changed the timezone. The VM pool configuration also gets updated once this value is changed.

===
        <initialization>
            <authorized_ssh_keys></authorized_ssh_keys>
            <custom_script></custom_script>
            <host_name>test2.com</host_name>
            <nic_configurations/>
            <regenerate_ssh_keys>false</regenerate_ssh_keys>
            <timezone>Asia/Jerusalem</timezone>
            <user_name></user_name>
        </initialization>
===

Increased the VM count of the pool by 2. Both new VMs were created with the initialization values of the test_pool-2 VM, because the pool now has that configuration.

====
/api/vms/261b6f5e-43f0-47b0-9b17-7d101439228f |egrep -A8 "<initialization>"
    <initialization>
        <authorized_ssh_keys></authorized_ssh_keys>
        <custom_script></custom_script>
        <host_name>test2.com</host_name>
        <nic_configurations/>
        <regenerate_ssh_keys>false</regenerate_ssh_keys>
        <timezone>Asia/Jerusalem</timezone>
        <user_name></user_name>
    </initialization>

===

Version-Release number of selected component (if applicable):

RHV 4.2.7


How reproducible:

100%

Steps to Reproduce:

1. Create a pool with "initial run" configuration.
2. Edit the "initial run" configuration of a VM which is part of the pool. The pool configuration will also be updated with the same info.


Actual results:

The pool configuration gets updated when editing a VM's configuration.

Expected results:

I think we should not allow editing a VM's "initial run" configuration if it is part of a pool, just as other properties such as CPU and memory are blocked. If we do allow it, the edit should not change the VM pool's configuration.

Additional info:

Comment 1 Ryan Barry 2019-02-12 00:15:35 UTC
I'm in favor of simply disallowing this.

On the one hand, it's expected behavior -- pools are not individual VMs, and they share these properties. On the other hand, that doesn't seem intuitive.

Comment 4 Steven Rosenberg 2019-03-19 14:43:44 UTC
It is not clear to me where the "initial run configuration" resides.

You give this in the description:

/ovirt-engine/api/vmpools/ |grep -A8 "<initialization>"

but the command (ls ?) is missing and I do not have a vmpools directory in my development environment.

Could you send me your file if it is an XML file, and/or provide more details on the exact location of the configuration.

Thank you.

Comment 5 nijin ashok 2019-03-20 01:46:53 UTC
(In reply to Steven Rosenberg from comment #4)
> It is not clear to me where the "initial run configuration" resides.
> 
> You give this in the description:
> 
> /ovirt-engine/api/vmpools/ |grep -A8 "<initialization>"
> 

That's an API call, not a filesystem path. Please see https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/rest_api_guide/#services-vm_pools-methods-list.
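
For reference, the filtering done above with grep -A8 can also be done programmatically. The sketch below parses the sample <initialization> fragment from the description and extracts the host_name and timezone fields; the class and method names are illustrative only, and a live call would hit GET /ovirt-engine/api/vmpools/ with deployment-specific credentials.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.w3c.dom.Document;

public class InitRunExtractor {
    // Sample <initialization> fragment as shown in the bug description.
    static final String SAMPLE =
        "<initialization>"
      + "<authorized_ssh_keys></authorized_ssh_keys>"
      + "<custom_script></custom_script>"
      + "<host_name>test.com</host_name>"
      + "<nic_configurations/>"
      + "<regenerate_ssh_keys>false</regenerate_ssh_keys>"
      + "<timezone>Pacific/Honolulu</timezone>"
      + "<user_name></user_name>"
      + "</initialization>";

    // Returns the text content of the first element with the given tag name.
    static String field(Document doc, String tag) {
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
            new ByteArrayInputStream(SAMPLE.getBytes(StandardCharsets.UTF_8)));
        System.out.println("host_name=" + field(doc, "host_name")); // test.com
        System.out.println("timezone=" + field(doc, "timezone"));   // Pacific/Honolulu
    }
}
```

The same extraction applied to the pool response after the edit in the description would return test2.com and Asia/Jerusalem, which is the mismatch this bug is about.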

Comment 6 Steven Rosenberg 2019-03-21 14:50:13 UTC
I tested this on Ovirt 4.3 and 4.2.8. I performed the following steps:

1. Created VM1 and set the Initial Run flags with VM Host Name VM1 and the time zone set to Hawaiian.
2. Also created a VM2
3. Made a template from VM1.
4. Created POOL1 with 5 VMs.
5. Used the PUT command to modify the values.

Results: I was able to update the Description of the pool, but neither the Host Name nor the timezone in the initialization section was updated for the POOL or the POOL VMs.

This is the initialization section (for 4.2.8):

<initialization>
<authorized_ssh_keys />
<custom_script />
<host_name>VM2</host_name>
<nic_configurations />
<regenerate_ssh_keys>false</regenerate_ssh_keys>
<timezone>Asia/Jerusalem</timezone>
<user_name />
</initialization>

This is the full message body:

<vm_pool href="/ovirt-engine/api/vmpools/aa371a7b-f746-4753-b5a3-43e43c6c2bef" id="aa371a7b-f746-4753-b5a3-43e43c6c2bef">
<actions>
<link href="/ovirt-engine/api/vmpools/aa371a7b-f746-4753-b5a3-43e43c6c2bef/allocatevm" rel="allocatevm" />
</actions>
<name>POOL1</name>
<description>Virtual Machine Pool B</description>
<link href="/ovirt-engine/api/vmpools/aa371a7b-f746-4753-b5a3-43e43c6c2bef/permissions" rel="permissions" />
<auto_storage_select>false</auto_storage_select>
<max_user_vms>1</max_user_vms>
<prestarted_vms>0</prestarted_vms>
<size>5</size>
<stateful>false</stateful>
<type>automatic</type>
<use_latest_template_version>true</use_latest_template_version>
<cluster href="/ovirt-engine/api/clusters/6e2cb15a-fe25-11e8-a4cc-8c16450ea519" id="6e2cb15a-fe25-11e8-a4cc-8c16450ea519" />
<template href="/ovirt-engine/api/templates/eb72fcb4-86d4-45de-bbf3-de7ee1df9090" id="eb72fcb4-86d4-45de-bbf3-de7ee1df9090" />
<vm href="/ovirt-engine/api/vms/7500390d-9ab0-486a-831f-9b2baeb33160" id="7500390d-9ab0-486a-831f-9b2baeb33160">
<name>POOL1-1</name>
<description />
<comment />
<bios>
<boot_menu>
<enabled>false</enabled>
</boot_menu>
</bios>
<cpu>
<architecture>x86_64</architecture>
<topology>
<cores>1</cores>
<sockets>1</sockets>
<threads>1</threads>
 </topology>
 </cpu>
<cpu_shares>0</cpu_shares>
<creation_time>2019-03-21T16:27:49.566+02:00</creation_time>
<delete_protected>false</delete_protected>
<display>
<allow_override>false</allow_override>
<copy_paste_enabled>true</copy_paste_enabled>
<disconnect_action>LOCK_SCREEN</disconnect_action>
<file_transfer_enabled>true</file_transfer_enabled>
<monitors>1</monitors>
<single_qxl_pci>false</single_qxl_pci>
<smartcard_enabled>false</smartcard_enabled>
<type>spice</type>
 </display>
<high_availability>
<enabled>false</enabled>
<priority>1</priority>
 </high_availability>
<initialization>
<authorized_ssh_keys />
<custom_script />
<host_name>VM2</host_name>
<nic_configurations />
<regenerate_ssh_keys>false</regenerate_ssh_keys>
<timezone>Asia/Jerusalem</timezone>
<user_name />
 </initialization>
<io>
<threads>1</threads>
</io>
<memory>1073741824</memory>
<memory_policy>
<guaranteed>1073741824</guaranteed>
<max>4294967296</max>
 </memory_policy>
<migration>
<auto_converge>inherit</auto_converge>
<compressed>inherit</compressed>
 </migration>
<migration_downtime>-1</migration_downtime>
<multi_queues_enabled>true</multi_queues_enabled>
<origin>ovirt</origin>
<os>
<boot>
<devices>
<device>hd</device>
</devices>
</boot>
<type>other</type>
 </os>
<placement_policy>
<affinity>migratable</affinity>
</placement_policy>
<small_icon id="8876f008-dab6-acff-13d1-3735e98cd485" />
<sso>
<methods>
<method id="guest_agent" />
</methods>
</sso>
<start_paused>false</start_paused>
<stateless>true</stateless>
<storage_error_resume_behaviour>auto_resume</storage_error_resume_behaviour>
<time_zone>
<name>Etc/GMT</name>
</time_zone>
<type>desktop</type>
<usb>
<enabled>true</enabled>
<type>native</type>
 </usb>
<next_run_configuration_exists>false</next_run_configuration_exists>
<numa_tune_mode>interleave</numa_tune_mode>
<status>down</status>
<stop_time>2019-03-21T16:27:49.568+02:00</stop_time>
<use_latest_template_version>true</use_latest_template_version>
 </vm>
 </vm_pool>

Comment 7 nijin ashok 2019-03-22 06:47:21 UTC
(In reply to Steven Rosenberg from comment #6)

> Results, I was able to update the Description of the Pool, but neither the
> Host Name nor the timezone in the initialization section updated for the
> POOL, nor the POOL VMs.

It depends on which VM you are editing. I can see that the pool is linked to a single VM, and any change to that particular VM can modify the pool.

For example, I have this.

====
/ovirt-engine/api/vmpools/ |egrep   "name|timezone"


        <name>test_pool</name>        => pool name
            <name>test_pool-5</name>  => VM linked
                <timezone>America/Indianapolis</timezone> => timezone in initial run
===


Changed the timezone of the VM test_pool-5 from the RHV-M GUI. The pool configuration also got changed.

===
/ovirt-engine/api/vmpools/ |egrep   "name|timezone"
        <name>test_pool</name>
            <name>test_pool-5</name>
                <timezone>America/Chicago</timezone>  => timezone of pool also got changed
===

I don't know the logic of how a particular VM is selected for the pool configuration. However, the "linked VM" sometimes changes randomly, so sometimes I have to edit all the VMs to reproduce the issue.

In my opinion, we should simply disable editing of the initial run of pool VMs, as almost all other properties, such as memory and CPU, are already disabled.

Comment 8 Steven Rosenberg 2019-03-24 15:37:07 UTC
I looked at this issue again. Regardless of which VM one specifies, the initialization section values are not changed by the REST API when we attempt to update the initial run time zone.

It seems from debugging the code that these are the only values that are actually updated and saved:

                .addValue("vm_pool_description", pool.getVmPoolDescription())
                .addValue("vm_pool_comment", pool.getComment())
                .addValue("vm_pool_id", pool.getVmPoolId())
                .addValue("vm_pool_name", pool.getName())
                .addValue("vm_pool_type", pool.getVmPoolType())
                .addValue("stateful", pool.isStateful())
                .addValue("parameters", pool.getParameters())
                .addValue("prestarted_vms", pool.getPrestartedVms())
                .addValue("cluster_id", pool.getClusterId())
                .addValue("max_assigned_vms_per_user", pool.getMaxAssignedVmsPerUser())
                .addValue("spice_proxy", pool.getSpiceProxy())
                .addValue("is_being_destroyed", pool.isBeingDestroyed())
                .addValue("is_auto_storage_select", pool.isAutoStorageSelect());

Also, in the UI, when editing a vmpool, the "initial run" section controls are mostly disabled at that level.

Perhaps we should retest this with the most current version to ensure we are in sync.

Comment 9 nijin ashok 2019-03-26 09:31:01 UTC
(In reply to Steven Rosenberg from comment #8)

> Also in the UI when editing a vmpool, the "initial run" section controls are
> mostly disabled ate that level.

We have to edit a VM which is part of the pool, instead of editing the pool itself, to reproduce the issue.

> Prehaps we should retest this with the most current version to ensure we are
> in sync.

I have tested with 4.3 with the same results.

Comment 20 Martin Tessun 2019-04-23 02:56:28 UTC
As it doesn't make sense (conceptually) to have a vm pool where different VMs have different RunOnce configurations, this should be blocked by default.

Changing the description to reflect this change.

Comment 21 Shmuel Melamud 2019-04-24 17:45:56 UTC
I doubt we should block editing of the VM init configuration of individual VMs in a VM pool. Generally, we do not block, without a strong reason, functionality that has been enabled for a long time, because some users may rely on it. For example, there are scenarios where VMs in a pool have different usernames/passwords in the VM init configuration; they are set by a script that runs after the pool is created. I believe that editing the VM init configuration of individual VMs in the pool was allowed on purpose.

Yes, we have a "reference VM" in the pool, and editing its VM init configuration changes the default configuration used for new VMs in the pool. But if you never need to change the VM init configuration of an individual VM, you simply do not change it and the problem does not appear. If you sometimes do need to change the VM init configuration of an individual VM, this functionality should not be blocked.

Comment 22 Ryan Barry 2019-04-24 17:57:45 UTC
Let's put it differently -

See comment #7. Trying to edit the pool's VMs isn't deterministic: it doesn't always return the same VM, and there's no consistency in the user experience. VMs which are not identical do not make sense in a pool, conceptually -- sealed/sysprepped VMs which use centralized auth for VDI or similar are the primary use case here. If the VMs use different scripts for configuration (or different usernames/passwords), a pool is not the appropriate use case.

Let's assert the alternative. What's a "valid" use case where an individual VM which is part of a pool would need to be edited? Sure, it can currently be done, but all other fields (memory, etc.) are blocked because this is _not_ an intentional omission -- it was missed, and it doesn't work as expected when attempted. Blocking editing makes the initial run match the user experience of the other pool values.

Comment 23 Shmuel Melamud 2019-04-24 22:08:38 UTC
You are right that conceptually all VMs in a pool must be interchangeable. But that is not how pools are used in many cases. We even have stateful VM pools as a feature, a self-contradictory combination that reflects real-world usage scenarios of the pools. Conceptually, I support blocking VM init configuration editing, but I suspect it will break some real-world use cases that I have heard about.

Comment 24 Michal Skrivanek 2019-04-25 08:30:53 UTC
I agree with you, Shmuel. I realized the posted change as is right now affects manual stateful pools as well, and the intention of those is indeed that the users/owners keep them for a longer time and customize them. The only problem I see is that there are many other problematic properties (just check the Edit VM dialog of a pool VM; it's really a rather semi-random mess of things you can and cannot change) - but for this particular vm init case, if it can be fixed easily so that it works correctly, then let's fix it.

Comment 25 Michal Skrivanek 2019-04-25 08:45:19 UTC
To add more explanation:
A manual pool means that once a user claims a VM from the pool, it stays assigned to that user; they can shut it down, start it again, and edit it (to some degree), and only the admin can take it away, at which point the VM state/config is reverted. And with bug 1234394 (stateful pools) it is not even reverted then.
I hope this clarifies the use case.

Now the behavior for pool editing is indeed that a random (first) VM is taken as a reference assuming all of them are the same. It will sort of work for stateless automatic pools (a change will be reverted on power down anyway), for other types it's not so nice...

maybe we could create a separate dummy unassignable vm to hold the master pool configuration or add a field to vm_pools holding the ovf snapshot? It's a lot more complicated, but it would fix it for good.

Comment 26 Steven Rosenberg 2019-04-29 10:52:20 UTC
The issue is really only with editing the VM's Initial Run data. The user can still modify the Initial Run data when choosing Run Once, so the blocking requested here does not really affect usability, and it does prevent the defaults from differing between the actual POOL and each individual VM.

In other words, there is valid logic to the current request, which avoids the more complex solution of making the POOL an entity independent of the VMs it spawned.
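
The blocking the discussion converges on can be sketched as a small, self-contained validation. The class and method names below are hypothetical stand-ins, not the actual ovirt-engine code (the real change landed via gerrit 99594/100932); only the error message is taken verbatim from the verified behavior in comment 30.

```java
import java.util.Objects;

public class PoolVmInitValidator {
    // Minimal stand-in for the engine's initialization (initial run) entity.
    public static final class Initialization {
        final String hostName;
        final String timezone;
        public Initialization(String hostName, String timezone) {
            this.hostName = hostName;
            this.timezone = timezone;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Initialization)) return false;
            Initialization other = (Initialization) o;
            return Objects.equals(hostName, other.hostName)
                && Objects.equals(timezone, other.timezone);
        }
        @Override public int hashCode() {
            return Objects.hash(hostName, timezone);
        }
    }

    // Message observed after the fix (comment 30 / comment 41).
    public static final String BLOCKED_MSG =
        "Cannot edit VM. The Initial Run Data cannot be modified for Stateless Pool Virtual Machines.";

    // Returns the blocking error if the update must be rejected, or null if
    // it may proceed: a pool VM may change anything except its initial run data.
    public static String validateUpdate(boolean vmBelongsToPool,
                                        Initialization current,
                                        Initialization requested) {
        if (vmBelongsToPool && !Objects.equals(current, requested)) {
            return BLOCKED_MSG;
        }
        return null;
    }

    public static void main(String[] args) {
        Initialization cur = new Initialization("test.com", "Pacific/Honolulu");
        Initialization req = new Initialization("test2.com", "Asia/Jerusalem");
        System.out.println(validateUpdate(true, cur, req));  // blocked message
        System.out.println(validateUpdate(false, cur, req)); // null: standalone VM, allowed
    }
}
```

Note the design choice this reflects: the check only fires on *changed* initial run data of a *pool* VM, so standalone VMs and Run Once remain unaffected, exactly as comment 26 argues.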

Comment 27 Shmuel Melamud 2019-04-30 11:55:25 UTC
(In reply to Michal Skrivanek from comment #25)
> maybe we could create a separate dummy unassignable vm to hold the master
> pool configuration or add a field to vm_pools holding the ovf snapshot? It's
> a lot more complicated, but it would fix it for good.

Yes, it would be a good solution, but to keep a VM in the VM list that is not a true VM, we would need to modify all VM queries to exclude it. And we would need to create separate queries to access it and change the commands accordingly. It is an enormous amount of work.

Comment 28 Michal Skrivanek 2019-04-30 12:05:01 UTC
then the snapshot field?

Comment 29 Shmuel Melamud 2019-04-30 13:05:52 UTC
(In reply to Michal Skrivanek from comment #28)
> then the snapshot field?

The variant with the snapshot field seems doable. We need to search for usages of VmPoolDao.getVmDataFromPoolByPoolGuid() and change them, and the saving as well.

Comment 30 Polina 2019-05-12 11:20:50 UTC
verified on ovirt-engine-4.4.0-0.0.master.20190509133331.gitb9d2a1e.el7.noarch

Edit of Initial Run Data fails with error:
Cannot edit VM. The Initial Run Data cannot be modified for Stateless Pool Virtual Machines.

Comment 32 Polina 2019-06-18 10:26:52 UTC
tested on ovirt-engine-4.3.5-0.1.el7.noarch

The behavior is not exactly as described in the Fix/Result of the Doc Text:

1. Create a pool with 5 VMs with an Initial Run configuration.
2. Edit one VM (pool-2) from the pool, changing the Initial Run timezone and host; this is allowed.
3. Increase the pool by two VMs.

Result: the newly added VMs have the original Initial Run configuration.
All VMs in the pool have the same Initial Run configuration, except the edited one.

I understand that it was decided not to allow editing a pool VM, right?

Comment 33 Steven Rosenberg 2019-06-18 11:01:25 UTC
(In reply to Polina from comment #32)
> tested on ovirt-engine-4.3.5-0.1.el7.noarch
> 
> The behavior not exactly as described in Fix:/Result: of Doc Text:
> 
> 1. Create a pool with 5 VMs with Initial Run configuration.
> 2. Edit one VM pool-2 from the pool changing the Initial run - change
> timezone and host - allowed.
> 3. Increase the pool by two VMS
> 
> Result: the new added VMs have the original Initial run configuration.
> All VMs in the pool have the same Initial run configuration, except the
> edited one.
> 
> I understand that it was decided not to allow editing pool VM , right?

This issue was verified in Comment 30 for 4.4. I am backporting it to 4.3.z so that it can be tested in the next release.

Comment 34 Michal Skrivanek 2019-06-19 08:37:49 UTC
This is NOT ON_QA. Either have a GA and clone BZs, or make sure the status is accurate with respect to the current target milestone.

Comment 36 Polina 2019-06-23 08:50:13 UTC
Tested on ovirt-engine-4.3.5.1-0.1.el7.noarch. Editing of pool VMs is allowed.

Comment 37 Steven Rosenberg 2019-06-23 09:15:54 UTC
(In reply to Polina from comment #36)
> Tested on ovirt-engine-4.3.5.1-0.1.el7.noarch . Editing of pool VMs is
> allowed

This issue was backported on June 19 and was not included in ovirt-engine-4.3.5.1-0.1.el7.noarch. You will have to wait for the next 4.3.z release to test it.

See:

 git lg
* a43dc01 - (HEAD, origin/ovirt-engine-4.3, check_init_run) Run metrics role only if package installed (3 days ago) <Shirly Rad
* e660e88 - webadmin: restrict critical space action blocker value to be valid (4 days ago) <Ahmad Khiet>
* 801f0f6 - webadmin: Reflect plugin-contributed main place buttons into details view (4 days ago) <Vojtech Szocs>
* 8a37718 - engine: treat image transfers cancelled by user/system differently (4 days ago) <Fedor Gavrilov>
* 9cebf02 - core: detach MBS disks upon live migration failure (4 days ago) <Benny Zlotnik>
* 6fb0927 - engine: Prevent updating of init run pool VMs (5 days ago) <Steven Rosenberg>
* 9a8ad48 - build: post ovirt-engine-4.3.5.1 (5 days ago) <Sandro Bonazzola>
* 006e3df - (tag: ovirt-engine-4.3.5.1) build: ovirt-engine-4.3.5.1 (5 days ago) <Sandro Bonazzola>

Comment 38 Polina 2019-06-23 10:14:22 UTC
In such a case, the BZ must not be ON_QA.

Comment 39 Ryan Barry 2019-06-24 20:55:07 UTC
MODIFIED, since the patch is merged

Comment 41 Polina 2019-07-01 06:49:27 UTC
verified on ovirt-engine-4.3.5.2-0.1.el7.noarch.
Editing of the 'Initial Run' configuration for pool VMs is no longer allowed; it brings:

Error while executing action: 
pool-2:
Cannot edit VM. The Initial Run Data cannot be modified for Stateless Pool Virtual Machines.

Comment 45 errata-xmlrpc 2019-08-12 11:53:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2431

