Bug 1659161 - Unable to edit pool that is delete protected
Summary: Unable to edit pool that is delete protected
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.1
Target Release: ---
Assignee: Steven Rosenberg
QA Contact: Polina
URL:
Whiteboard:
Depends On:
Blocks: 1720110
 
Reported: 2018-12-13 18:03 UTC by amashah
Modified: 2023-12-15 16:16 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, changing the template version of a VM pool created from a delete-protected VM made the VM pool non-editable and unusable. The current release fixes this issue: it blocks changing the template version of a VM pool whose VMs are delete-protected and displays an error message.
Clone Of:
: 1720110 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:16:51 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments
Test1 - VMs still on Pool (15.46 KB, image/png)
2019-06-04 16:39 UTC, Steven Rosenberg
no flags
Test1 - A Template and its subversion (9.09 KB, image/png)
2019-06-04 16:41 UTC, Steven Rosenberg
no flags
Test1 - Pool still has 5 VMs (16.22 KB, image/png)
2019-06-04 16:42 UTC, Steven Rosenberg
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:17:21 UTC
oVirt gerrit 99935 0 None MERGED engine: Block Delete Protected Template change 2021-01-15 12:23:18 UTC
oVirt gerrit 100799 0 None MERGED engine: Block Delete Protected Template change 2021-01-15 12:23:18 UTC

Description amashah 2018-12-13 18:03:42 UTC
Description of problem:

Cannot edit a pool that was created from a delete-protected template (based on a delete-protected VM).

"Error while executing action Edit VM Pool properties: Internal Engine Error"


Version-Release number of selected component (if applicable):
4.2.7

How reproducible:
100%

Steps to Reproduce:

Step 1) Create new template Test1_T from existing VM with Delete Protection enabled.

Note that Delete Protection is enabled for template Test1_T.

Step 2) Create new pool TestPool from Template Test1_T (base version) with Delete Protection enabled.  Configure Number of VMs=5.

Note that Delete Protection setting for TestPool is inherited from the Template.  Leave it enabled.

Note that all of the VMs in the TestPool have Delete Protection enabled.  (I believe that this is inherited from TestPool.)

Step 3) Create new template with Root Template Test1_T and Sub-Version Name 20181212-2.

Note that the new template Sub-Version has Delete Protection Enabled.

Step 4) Edit TestPool.  Change the Template to Test1_T Subversion 20181212-2.  

An "Operation Canceled" window pops up.   The message is "Error while executing action Edit VM Pool properties: Internal Engine Error".

After that, TestPool has 0 Assigned VMs.  TestPool can no longer be edited but it can be removed.

The VMs that used to be part of TestPool (TestPool-1, TestPool-2, ...) still exist.  You can manually disable Delete Protection and then delete these VMs.
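
For reference, a rough reproduction sketch using the Python SDK (ovirt-engine-sdk4). The engine URL, credentials, cluster name, and source VM ID are placeholders, the exact calls may vary slightly between SDK versions, and waiting for each template to reach OK status is omitted for brevity:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hedged reproduction sketch; all names/IDs below are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # lab setup only
)
system = connection.system_service()
templates = system.templates_service()
pools = system.vm_pools_service()

# Step 1: template Test1_T from an existing delete-protected VM.
base = templates.add(types.Template(
    name='Test1_T',
    vm=types.Vm(id='SOURCE_VM_ID'),
))

# Step 2: pool TestPool from the base version, 5 VMs; Delete Protection
# is inherited from the template.
pool = pools.add(types.VmPool(
    name='TestPool',
    cluster=types.Cluster(name='Default'),
    template=types.Template(id=base.id),
    size=5,
))

# Step 3: subversion 20181212-2 of Test1_T.
sub = templates.add(types.Template(
    name='Test1_T',
    version=types.TemplateVersion(
        base_template=types.Template(id=base.id),
        version_name='20181212-2',
    ),
    vm=types.Vm(id='SOURCE_VM_ID'),
))

# Step 4: point the pool at the subversion. On affected versions this
# failed with "Internal Engine Error" and left the pool non-editable.
pools.pool_service(pool.id).update(
    types.VmPool(template=types.Template(id=sub.id)),
)
connection.close()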




Actual results:

Unable to update the pool.


Expected results:

Be able to update/edit the pool.


Additional info:

Workaround: disable Delete Protection on the Template after its creation is completed.
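
In SDK terms, the workaround might look like this sketch (ovirt-engine-sdk4; connection details and the template ID are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hedged workaround sketch: clear Delete Protection on the new template so
# later pool edits remain possible. The template ID is a placeholder.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
template_service = connection.system_service().templates_service() \
                             .template_service('TEMPLATE_ID')
template_service.update(types.Template(delete_protected=False))
connection.close()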

Comment 3 Steven Rosenberg 2018-12-23 13:47:41 UTC
I reviewed this issue.

The edit button is disabled because the number of VMs became 0 and the vm_pool_map entries do not exist. 

The solutions may be:

1. Prevent changing the template, with a diagnostic message, when Delete Protection is on, because the current VMs cannot be removed.

2. Preserve the number of VMs (5 in the above scenario) by creating a new set of VMs for the new template / pool combination. This means we would have 10 VMs in this scenario.

Please advise which option is worth the effort.
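
As a side note, the broken state can be observed from the outside by counting the VMs still assigned to the pool; a minimal sketch, assuming the standard "pool" VM search property and placeholder connection details:

import ovirtsdk4 as sdk

# Hedged diagnostic sketch: if this prints 0 after the failed edit, the
# pool is in the broken state described above (vm_pool_map entries gone).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
assigned = vms_service.list(search='pool=TestPool')  # search by pool name
print('VMs still assigned to TestPool: %d' % len(assigned))
connection.close()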

Comment 4 Aram Agajanian 2018-12-24 07:49:12 UTC
(In reply to Steven Rosenberg from comment #3)
> 
> The solutions may be:
> 
> 1. To prevent the ability to change the template with diagnostics when
> Delete Protection is on because the current VMs cannot be removed.
> 
> 2. Preserve the number of VMs (5 in the above scenario) by creating a new
> set of VMs for the new template / pool combination. It means we will have 10
> VMs in this scenario.
> 
> Please advise which option is worth the effort.


From a customer (university VDI) perspective:

Solution 1 would be an improvement, but it seems that when the pool's Template Sub-Version is set to latest, you would also have to add diagnostics when a Template creation is attempted.

Solution 2 doesn't make sense for us because our pools are stateless.  There is no reason to preserve the old stateless VMs since they get reset when they are shutdown.

The Delete Protection setting for templates seems to be unique because it serves two purposes.  It protects the Template but it also gets inherited by the VMs in the pool.  

Perhaps, there could be two settings for templates: "Delete Protection" and (inherited) "VM Delete Protection".  Allow the user to disable the VM Delete Protection setting in the New Template window.

Comment 5 Steven Rosenberg 2018-12-24 09:30:34 UTC
(In reply to Aram Agajanian from comment #4)
> (In reply to Steven Rosenberg from comment #3)
> > 
> > The solutions may be:
> > 
> > 1. To prevent the ability to change the template with diagnostics when
> > Delete Protection is on because the current VMs cannot be removed.
> > 
> > 2. Preserve the number of VMs (5 in the above scenario) by creating a new
> > set of VMs for the new template / pool combination. It means we will have 10
> > VMs in this scenario.
> > 
> > Please advise which option is worth the effort.
> 
> 
> From a customer (university VDI) perspective:
> 
> Solution 1 would be an improvement but it seems that when the pool's
> Template Sub-Version is set to latest you would also have to add diagnostics
> when a Template creation is attempted.
> 
> Solution 2 doesn't make sense for us because our pools are stateless.  There
> is no reason to preserve the old stateless VMs since they get reset when
> they are shutdown.
> 
> The Delete Protection setting for templates seems to be unique because it
> serves two purposes.  It protects the Template but it also gets inherited by
> the VMs in the pool.  
> 
> Perhaps, there could be two settings for templates: "Delete Protection" and
> (inherited) "VM Delete Protection".  Allow the user to disable the VM Delete
> Protection setting in the New Template window.

Dear Aram,

Thank you for your response.

As I understand it, Solution 1 is preferred. What would you like the diagnostics to warn about concerning the Template Sub-Version / Delete Protection? Also, please clarify: would it be a warning that still allows the creation of the template?

Comment 6 Aram Agajanian 2018-12-28 02:12:52 UTC
(In reply to Steven Rosenberg from comment #5)
> 
> Dear Aram,
> 
> Thank you for your response.
> 
> As I understand Solution 1 is preferred. What would you like the diagnostics
> to warn about concerning the Template sub-Version / Delete Protection and
> please clarify that it would be a warning but we would still allow for the
> creation of the template?

Hi Steven,

I'm not sure why this would be a warning and not an error, since using a template with delete protection enabled causes the pool to become unusable the next time its Template setting is changed.  I have come up with warning messages for the Edit Pool and New Template cases, since that seems to be what you are asking for:

1) If the Pool is edited so that the Template setting is changed and the new template has Delete Protection enabled, this will cause the pool to break later on.  A warning would be:

"The specified Template (Name fghij, Sub-Version vwxyz) has Delete Protection enabled.  If this Template is used, Pool abcde will also have Delete Protection enabled.  A subsequent change to the Template setting of Pool abcde will cause the Pool to become unusable.  It is strongly recommended that Delete Protection be disabled on Template fghij, Sub-version vxwyx before proceeding.  If you do not wish to proceed, press the Cancel button."

2) Regarding the creation of a new Template Sub-Version with Delete Protection enabled when there is a Pool that uses the latest version of the same Template, a warning would be:

"Pool abcde uses the latest version of Template fghij.  If a new version of of Template fghij is created with Delete Protection enabled, Pool abcde will also have Delete Protection enabled.  Any subsequent change to the Template setting of Pool abcde will cause the Pool to become unusable.  If you do not wish to to proceed, press the Cancel button."


If the Delete Protection setting were to be added to the New Template window, then the user could disable it at the time that the template is created.

Comment 7 Steven Rosenberg 2018-12-30 08:35:32 UTC
Martin, please review and advise when you are available.

Comment 8 Martin Tessun 2019-01-09 10:02:49 UTC
Sounds reasonable to me. We can do this in docs for 4.3 probably, and target an "in-UI change" for 4.4.

Comment 9 Steven Rosenberg 2019-05-12 15:30:43 UTC
After some discussion, we implemented a solution where attempting to modify the Template when Delete Protection is enabled fails with an error. This avoids the problem with suggestion 1, where the user proceeds with the edit only to find the Pool cannot be edited again (because it is unstable), as well as with suggestion 2, where the user does not heed the warning, creates the Sub-Template, modifies the Pool's Template, and then finds that the Pool is no longer editable and the VMs were not deleted. Blocking the change of the Template when the VMs are Delete Protected prevents this instability.
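
For illustration, with the fix in place an update attempt like the following should now be rejected up front rather than half-applied; a sketch with placeholder IDs, assuming the SDK surfaces the validation failure as an ovirtsdk4.Error:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hedged sketch of the post-fix behavior: changing the template of a pool
# whose VMs are delete-protected fails cleanly instead of breaking the pool.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
pool_service = connection.system_service().vm_pools_service() \
                         .pool_service('POOL_ID')  # placeholder
try:
    pool_service.update(
        types.VmPool(template=types.Template(id='SUBVERSION_TEMPLATE_ID')),
    )
except sdk.Error as e:
    # Expected after the fix; see the error text quoted in comment 29.
    print('Blocked as expected: %s' % e)
connection.close()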

Comment 10 Aram Agajanian 2019-05-30 16:35:35 UTC
An additional case related to this bug:

- When a pool's Template subversion is set to "latest" and then a new subversion of the template is created.

Comment 11 Ryan Barry 2019-05-30 16:36:51 UTC
Logs?

Comment 12 Aram Agajanian 2019-06-01 00:33:10 UTC
I uploaded the engine.log file to my Red Hat Support Case after performing the following tests on RHV 4.2.8:

Test 1 
======
1) Create a template named LinuxLab_T_latest_test_1 
2) Create a pool named LinuxLab_latest_test 
template = LinuxLab_T_latest_test_1 
template subversion = latest 
number of VMs = 5 
number of running VMs = 0 
3) Create a subversion template of LinuxLab_T_latest_test_1 

Results of Test 1 
=================
The following Pool VMs were automatically detached from the pool: 
   LinuxLab_latest_test-2 
   LinuxLab_latest_test-3
   LinuxLab_latest_test-4 
   LinuxLab_latest_test-5 
The final result was a pool with one VM, LinuxLab_latest_test-1 

Test 2 
======
1) Delete previous subversion of template LinuxLab_T_latest_test_1 
2) Detach remaining VM from LinuxLab_latest_test pool and delete all VMs. 
3) Create a new pool named LinuxLab_latest_test 
template = LinuxLab_T_latest_test_1 
template subversion = latest 
number of VMs = 5 
number of running VMs = 3 
4) Create a new subversion template of LinuxLab_T_latest_test_1 

Results of Test 2 
=================
Nothing happens when the creation of the subversion template completes. 
The final result is a LinuxLab_latest_test pool with 5 VMs (3 running), all of which are using the original template (LinuxLab_T_latest_test_1) not the subversion.
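
For reference, a sketch of the Test 1 setup in SDK terms (ovirt-engine-sdk4). The names and IDs are placeholders, and use_latest_template_version on VmPool is an assumption based on the equivalent Vm attribute:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hedged sketch of Test 1: a pool tracking the "latest" template version,
# then a new subversion is created. All names/IDs are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
system = connection.system_service()

pool = system.vm_pools_service().add(types.VmPool(
    name='LinuxLab_latest_test',
    cluster=types.Cluster(name='Default'),
    template=types.Template(id='BASE_TEMPLATE_ID'),
    use_latest_template_version=True,  # "template subversion = latest"
    size=5,
))

# Create a new subversion; on 4.2.8 this detached 4 of the 5 pool VMs
# (Test 1), while with running VMs nothing changed (Test 2).
system.templates_service().add(types.Template(
    name='LinuxLab_T_latest_test_1',
    version=types.TemplateVersion(
        base_template=types.Template(id='BASE_TEMPLATE_ID'),
        version_name='subversion-2',  # placeholder name
    ),
    vm=types.Vm(id='SOURCE_VM_ID'),
))
connection.close()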

Comment 13 Steven Rosenberg 2019-06-02 08:07:16 UTC
(In reply to Aram Agajanian from comment #12)
> I uploaded the engine.log file to my Red Hat Support Case after performing
> the following tests on RHV 4.2.8:
> 
> Test 1 
> ======
> 1) Create a template named LinuxLab_T_latest_test_1 
> 2) Create a pool named LinuxLab_latest_test 
> template = LinuxLab_T_latest_test_1 
> template subversion = latest 
> number of VMs = 5 
> number of running VMs = 0 
> 3) Create a subversion template of LinuxLab_T_latest_test_1 
> 
> Results of Test 1 
> =================
> The following Pool VMs were automatically detached from the pool: 
>    LinuxLab_latest_test-2 
>    LinuxLab_latest_test-3
>    LinuxLab_latest_test-4 
>    LinuxLab_latest_test-5 
> The final result was a pool with one VM, LinuxLab_latest_test-1 
> 
> Test 2 
> ======
> 1) Delete previous subversion of template LinuxLab_T_latest_test_1 
> 2) Detach remaining VM from LinuxLab_latest_test pool and delete all VMs. 
> 3) Create a new pool named LinuxLab_latest_test 
> template = LinuxLab_T_latest_test_1 
> template subversion = latest 
> number of VMs = 5 
> number of running VMs = 3 
> 4) Create a new subversion template of LinuxLab_T_latest_test_1 
> 
> Results of Test 2 
> =================
> Nothing happens when the creation of the subversion template completes. 
> The final result is a LinuxLab_latest_test pool with 5 VMs (3 running), all
> of which are using the original template (LinuxLab_T_latest_test_1) not the
> subversion.

If the edit button is enabled after these tests, then these are different issues. This patch specifically addressed the delete protection issue and the inability to edit afterwards. Please confirm. If so, we may treat these issues separately.

Comment 15 Aram Agajanian 2019-06-03 23:47:58 UTC
(In reply to Steven Rosenberg from comment #13)
> 
> If the edit button is enabled after these tests, then they are different
> issues. This patch specifically addressed the delete protection issue and
> inability to edit afterwards. Please confirm. If so we may treat these
> issues separately.

Yes, the edit button for the pool is enabled after the tests in comment #12.

Comment 16 Aram Agajanian 2019-06-04 00:21:09 UTC
(In reply to Aram Agajanian from comment #15)
> (In reply to Steven Rosenberg from comment #13)
> > 
> > If the edit button is enabled after these tests, then they are different
> > issues. This patch specifically addressed the delete protection issue and
> > inability to edit afterwards. Please confirm. If so we may treat these
> > issues separately.
> 
> Yes, the edit button for the pool is enabled after the tests in comment #12.

After Test 2 and then shutting down the 3 running VMs in the pool, the Edit button is disabled.  

I have also found that some of the results of these tests today are different from the results on Friday.

Comment 19 Steven Rosenberg 2019-06-04 16:39:43 UTC
Created attachment 1577199 [details]
Test1 - VMs still on Pool

I performed Test 1 as described in the following comment:

https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c12

Creating the subversion in item 3 did not detach the VMs from the pool. As one can see in the screenshot, the VMs still have the Pool icons in the second column (and the tooltips display "Desktop pool").

Please provide the version the reported issue was simulated on. I am using what will be 4.4.

Comment 20 Steven Rosenberg 2019-06-04 16:41:18 UTC
Created attachment 1577200 [details]
Test1 - A Template and its subversion

As per previous comment

Comment 21 Steven Rosenberg 2019-06-04 16:42:10 UTC
Created attachment 1577201 [details]
Test1 - Pool still has 5 VMs

As per previous comment

Comment 22 Steven Rosenberg 2019-06-04 16:47:04 UTC
Please review comments 19-21 and the screen shots showing that after performing Test1, the VMs are still attached to the Pool.

It is assumed that Test2 is dependent upon Test1, but I was not able to remove the Template subversion because it was being used by the VMs.

One additional step: I first needed to create a VM in order to make a template from it. Please advise if your procedure was different.

Again, please provide version information: not just the engine version, but also the version your hosts are running.

Comment 23 Aram Agajanian 2019-06-05 00:25:55 UTC
(In reply to Steven Rosenberg from comment #22)
> Please review comments 19-21 and the screen shots showing that after
> performing Test1, the VMs are still attached to the Pool.
> 
> It is assumed that Test2 is dependent upon Test1, but I was not able to
> remove the Template subversion because it was being used by the VMs.

When I tested last week, the VMs weren't removed and re-created so they weren't using the Template subversion.

> 
> One additional step was I first needed to create a VM in order to make a
> template from it. Please advise if your procedure was different.

I had an existing VM that I used.

> 
> Again please provide version information, host just on the engine, but what
> version your host is running.

rhvm-4.2.8.2-0.1.el7ev.noarch

One host (host02) seems to be running older versions than the others.  However, when I run "Check for Upgrade" for that host there are "no updates found".

[root@host01 ~]# rpm -qa ovirt\* | sort
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-host-4.3.2-1.el7ev.x86_64
ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch


[root@host02 ~]# rpm -qa ovirt\* | sort
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-host-4.2.3-1.el7ev.x86_64
ovirt-host-dependencies-4.2.3-1.el7ev.x86_64
ovirt-host-deploy-1.7.4-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.34-1.el7ev.noarch
ovirt-imageio-common-1.4.5-0.el7ev.x86_64
ovirt-imageio-daemon-1.4.5-0.el7ev.noarch
ovirt-node-ng-nodectl-4.2.0-0.20170814.0.el7.noarch
ovirt-provider-ovn-driver-1.2.17-1.el7ev.noarch
ovirt-setup-lib-1.1.5-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch


[root@host03 ~]# rpm -qa ovirt\* | sort
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-host-4.3.2-1.el7ev.x86_64
ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch

Comment 24 Steven Rosenberg 2019-06-05 07:59:20 UTC
(In reply to Aram Agajanian from comment #23)
> (In reply to Steven Rosenberg from comment #22)
> > Please review comments 19-21 and the screen shots showing that after
> > performing Test1, the VMs are still attached to the Pool.
> > 
> > It is assumed that Test2 is dependent upon Test1, but I was not able to
> > remove the Template subversion because it was being used by the VMs.
> 
> When I tested last week, the VMs weren't removed and re-created so they
> weren't using the Template subversion.
> 
> > 
> > One additional step was I first needed to create a VM in order to make a
> > template from it. Please advise if your procedure was different.
> 
> I had an existing VM that I used.
> 
> > 
> > Again please provide version information, host just on the engine, but what
> > version your host is running.
> 
> rhvm-4.2.8.2-0.1.el7ev.noarch
> 
> One host (host02) seems to be running older versions than the others. 
> However, when I run "Check for Upgrade" for that host there are "no updates
> found".
> 
> [root@host01 ~]# rpm -qa ovirt\* | sort
> ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
> ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
> ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
> ovirt-host-4.3.2-1.el7ev.x86_64
> ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
> ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
> ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
> ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
> ovirt-imageio-common-1.5.1-0.el7ev.x86_64
> ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
> ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
> ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
> ovirt-vmconsole-1.0.7-1.el7ev.noarch
> ovirt-vmconsole-host-1.0.7-1.el7ev.noarch
> 
> 
> [root@host02 ~]# rpm -qa ovirt\* | sort
> ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
> ovirt-host-4.2.3-1.el7ev.x86_64
> ovirt-host-dependencies-4.2.3-1.el7ev.x86_64
> ovirt-host-deploy-1.7.4-1.el7ev.noarch
> ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
> ovirt-hosted-engine-setup-2.2.34-1.el7ev.noarch
> ovirt-imageio-common-1.4.5-0.el7ev.x86_64
> ovirt-imageio-daemon-1.4.5-0.el7ev.noarch
> ovirt-node-ng-nodectl-4.2.0-0.20170814.0.el7.noarch
> ovirt-provider-ovn-driver-1.2.17-1.el7ev.noarch
> ovirt-setup-lib-1.1.5-1.el7ev.noarch
> ovirt-vmconsole-1.0.7-1.el7ev.noarch
> ovirt-vmconsole-host-1.0.7-1.el7ev.noarch
> 
> 
> [root@host03 ~]# rpm -qa ovirt\* | sort
> ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
> ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
> ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
> ovirt-host-4.3.2-1.el7ev.x86_64
> ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
> ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
> ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
> ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
> ovirt-imageio-common-1.5.1-0.el7ev.x86_64
> ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
> ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
> ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
> ovirt-vmconsole-1.0.7-1.el7ev.noarch
> ovirt-vmconsole-host-1.0.7-1.el7ev.noarch

Please test the scenarios in comment 12 with the versions in comment 23 and see if we can simulate Aram's scenarios. This way we can decide whether an upgrade is recommended and, if so, to which version. It seems to work fine on the master branch.

Comment 26 Polina 2019-06-06 11:54:06 UTC
Hi Steven,

I reproduced the scenario and got the result described in step 4 of https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c0 ("Error while executing action Edit VM Pool properties: Internal Engine Error")
for engine version 4.2.8.7-0.1.el7ev.

Then I tested the same scenario in D/S ovirt-engine-4.3.4.1-0.1.el7.noarch, and for step 4 of https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c0 I see that there is no error in the engine, but I'm not sure that the behavior is completely correct:
I can edit the pool only once and replace the template there with the subversion without any error. However, the pool VMs are detached from the pool after this (Stateless is checked for them) and the pool cannot be edited again; only the Remove option is available for it.

The same happens on master ovirt-engine-4.4.0-0.0.master.20190509133331.gitb9d2a1e.el7.noarch:
Created a pool with 5 VMs based on a Delete Protected Template, edited the pool replacing the template with the base version. As a result, all 5 VMs are detached from the Pool. The pool can only be removed; no more editing is available.

Comment 27 Polina 2019-06-06 12:02:16 UTC
Small correction: "edited the pool replacing the template with the subversion" (instead of with the base version).

Comment 28 Steven Rosenberg 2019-06-06 15:01:55 UTC
(In reply to Polina from comment #26)
> Hi Steven,
> 
> I reproduced the scenario and got the result described in step4 
> https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c0 ("Error while
> executing action Edit VM Pool properties: Internal Engine Error")
> for engine version  4.2.8.7-0.1.el7ev.
> 
> Then I tested the same scenario in D/S ovirt-engine-4.3.4.1-0.1.el7.noarch
> and for step4 https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c0 I see
> that there is no error in the engine, but I'm not sure that the behavior is
> completely correct:
> I can edit the pool only once and replace the template there to the
> subversion without no error. Though the pool vms are detached from the pool
> after this (Stateless is checked for them) and the pool could not be edited
> again. only Remove option is available for it.
> 
> The same happens on master
> ovirt-engine-4.4.0-0.0.master.20190509133331.gitb9d2a1e.el7.noarch:
> Created a pool with 5 VMs based on Delete Protected Template, edit the pool
> replacing the template to the base version. As result, all 5 VMs are
> detached from the Pool. The pool could only be removed, no more editing
> available.

The delete protection fix was not backported, so you should also test this on the master branch. Also, please check Test 1 without delete protection enabled.

Thank you Polina.

Comment 29 Polina 2019-08-12 08:08:14 UTC
verified_upstream on ovirt-engine-4.4.0-0.0.master.20190803234902.gitc1df3db.el7.noarch

both scenarios - with DeleteProtection and without.

With the Delete Protection option, editing the pool from the template to a subversion is now blocked: "Cannot change the VM Template when the VMs created are set to Delete Protected."

Comment 30 Daniel Gur 2019-08-28 13:13:36 UTC
sync2jira

Comment 31 Daniel Gur 2019-08-28 13:17:49 UTC
sync2jira

Comment 32 Casper (RHV QE bot) 2019-12-13 12:05:57 UTC
QE verification bot: the bug was verified upstream

Comment 33 RHV bug bot 2019-12-13 13:13:32 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 34 RHV bug bot 2019-12-20 17:43:31 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 35 RHV bug bot 2020-01-08 14:48:13 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 36 RHV bug bot 2020-01-08 15:14:11 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 37 RHV bug bot 2020-01-24 19:50:01 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 43 errata-xmlrpc 2020-08-04 13:16:51 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

