Description of problem:
Cannot edit a pool that was created from a delete-protected template (based on a delete-protected VM). "Error while executing action Edit VM Pool properties: Internal Engine Error"

Version-Release number of selected component (if applicable):
4.2.7

How reproducible:
100%

Steps to Reproduce:
Step 1) Create a new template Test1_T from an existing VM with Delete Protection enabled. Note that Delete Protection is enabled for template Test1_T.
Step 2) Create a new pool TestPool from template Test1_T (base version) with Delete Protection enabled. Configure Number of VMs=5. Note that the Delete Protection setting for TestPool is inherited from the template; leave it enabled. Note that all of the VMs in TestPool have Delete Protection enabled. (I believe this is inherited from TestPool.)
Step 3) Create a new template with Root Template Test1_T and Sub-Version Name 20181212-2. Note that the new template sub-version has Delete Protection enabled.
Step 4) Edit TestPool. Change the Template to Test1_T sub-version 20181212-2. An "Operation Canceled" window pops up with the message "Error while executing action Edit VM Pool properties: Internal Engine Error". After that, TestPool has 0 Assigned VMs. TestPool can no longer be edited, but it can be removed. The VMs that used to be part of TestPool (TestPool-1, TestPool-2, ...) still exist; you can manually disable Delete Protection and then delete these VMs.

Actual results:
Unable to update the pool.

Expected results:
Be able to update/edit the pool.

Additional info:
Work around this issue by remembering to disable Delete Protection on the template after its creation is completed.
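The inheritance chain in the steps above (template -> pool -> pool VMs) can be sketched as a toy model. This is illustrative Python, not oVirt engine code; all class and field names are hypothetical.

```python
# Toy model (NOT oVirt engine code) of how the Delete Protection flag
# propagates in the reproduction steps: template -> pool -> pool VMs.
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    delete_protected: bool = False

@dataclass
class Vm:
    name: str
    delete_protected: bool

@dataclass
class Pool:
    name: str
    template: Template
    size: int
    vms: list = field(default_factory=list)

    def __post_init__(self):
        # Pool VMs inherit the flag from the template the pool is based on,
        # which is why they later resist automatic removal.
        self.vms = [Vm(f"{self.name}-{i}", self.template.delete_protected)
                    for i in range(1, self.size + 1)]

tmpl = Template("Test1_T", delete_protected=True)
pool = Pool("TestPool", tmpl, size=5)
print(all(vm.delete_protected for vm in pool.vms))  # True
```

Because every TestPool-N VM ends up delete-protected, the engine cannot remove them when the pool's template changes, which is the root of the failure in Step 4.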
I reviewed this issue. The edit button is disabled because the number of VMs became 0 and the vm_pool_map entries no longer exist.

The solutions may be:

1. Prevent the ability to change the template, with diagnostics, when Delete Protection is on, because the current VMs cannot be removed.

2. Preserve the number of VMs (5 in the above scenario) by creating a new set of VMs for the new template/pool combination. That means we would have 10 VMs in this scenario.

Please advise which option is worth the effort.
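The "edit button disabled" condition can be illustrated with a simplified, hypothetical vm_pool_map table. The real engine schema, column names, and query are assumptions here; only the observed count-of-zero behavior is taken from the comment above.

```python
import sqlite3

# Hypothetical, simplified vm_pool_map (the real engine schema differs).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_pool_map (vm_guid TEXT, vm_pool_id TEXT)")
conn.executemany(
    "INSERT INTO vm_pool_map VALUES (?, ?)",
    [(f"vm-{i}", "TestPool") for i in range(1, 6)],
)

def pool_editable(pool_id):
    # Mirrors the observed behavior: with zero mapped VMs, Edit is disabled.
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM vm_pool_map WHERE vm_pool_id = ?", (pool_id,)
    ).fetchone()
    return count > 0

print(pool_editable("TestPool"))  # True
# The failed edit leaves the pool with 0 assigned VMs:
conn.execute("DELETE FROM vm_pool_map WHERE vm_pool_id = ?", ("TestPool",))
print(pool_editable("TestPool"))  # False
```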
(In reply to Steven Rosenberg from comment #3)
> The solutions may be:
> 1. To prevent the ability to change the template with diagnostics when Delete Protection is on because the current VMs cannot be removed.
> 2. Preserve the number of VMs (5 in the above scenario) by creating a new set of VMs for the new template / pool combination. It means we will have 10 VMs in this scenario.
> Please advise which option is worth the effort.

From a customer (university VDI) perspective:

Solution 1 would be an improvement, but it seems that when the pool's Template Sub-Version is set to "latest" you would also have to add diagnostics when a template creation is attempted.

Solution 2 doesn't make sense for us because our pools are stateless. There is no reason to preserve the old stateless VMs, since they get reset when they are shut down.

The Delete Protection setting for templates seems to be unique because it serves two purposes: it protects the template, but it also gets inherited by the VMs in the pool.

Perhaps there could be two settings for templates: "Delete Protection" and an inherited "VM Delete Protection". Allow the user to disable the VM Delete Protection setting in the New Template window.
(In reply to Aram Agajanian from comment #4)
> From a customer (university VDI) perspective:
>
> Solution 1 would be an improvement but it seems that when the pool's Template Sub-Version is set to latest you would also have to add diagnostics when a Template creation is attempted.
>
> Solution 2 doesn't make sense for us because our pools are stateless. There is no reason to preserve the old stateless VMs since they get reset when they are shutdown.
>
> Perhaps, there could be two settings for templates: "Delete Protection" and (inherited) "VM Delete Protection". Allow the user to disable the VM Delete Protection setting in the New Template window.

Dear Aram,

Thank you for your response.

As I understand, Solution 1 is preferred. What would you like the diagnostics to warn about concerning the Template Sub-Version / Delete Protection? Also, please clarify: would this be a warning only, with the creation of the template still allowed?
(In reply to Steven Rosenberg from comment #5)
> As I understand Solution 1 is preferred. What would you like the diagnostics to warn about concerning the Template sub-Version / Delete Protection and please clarify that it would be a warning but we would still allow for the creation of the template?

Hi Steven,

I'm not sure why this would be a warning and not an error, since using a template with Delete Protection enabled causes the pool to become unusable the next time its Template setting is changed. I have come up with warning messages for the Edit Pool and New Template cases, since that seems to be what you are asking for:

1) If the pool is edited so that the Template setting is changed and the new template has Delete Protection enabled, this will cause the pool to break later on. A warning would be:

"The specified Template (Name fghij, Sub-Version vwxyz) has Delete Protection enabled. If this Template is used, Pool abcde will also have Delete Protection enabled. A subsequent change to the Template setting of Pool abcde will cause the Pool to become unusable. It is strongly recommended that Delete Protection be disabled on Template fghij, Sub-Version vwxyz before proceeding. If you do not wish to proceed, press the Cancel button."

2) Regarding the creation of a new Template Sub-Version with Delete Protection enabled when there is a pool that uses the latest version of the same template, a warning would be:

"Pool abcde uses the latest version of Template fghij. If a new version of Template fghij is created with Delete Protection enabled, Pool abcde will also have Delete Protection enabled. Any subsequent change to the Template setting of Pool abcde will cause the Pool to become unusable. If you do not wish to proceed, press the Cancel button."

If the Delete Protection setting were to be added to the New Template window, then the user could disable it at the time the template is created.
Martin, please review and advise when you are available.
Sounds reasonable to me. We can probably do this in docs for 4.3, and target an in-UI change for 4.4.
After some discussion, we implemented a solution where attempting to modify the Template while Delete Protection is enabled fails with an error. This avoids the problem with suggestion 1, where the user proceeds with the edit only to find that the pool cannot be edited again (because it is unstable), as well as with suggestion 2, where the user does not heed the warning, creates the sub-template, modifies the pool's Template, and then finds that the pool is no longer editable and the VMs were not deleted. Blocking the change of the Template while its VMs are Delete Protected prevents this instability.
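The validation just described can be sketched as follows. The function and parameter names are illustrative, not the actual engine identifiers; the error text is the message quoted in the verification comment further down.

```python
# Sketch of the implemented fix (illustrative names, not engine code):
# changing a pool's template fails fast while its VMs are Delete Protected.
class ValidationError(Exception):
    pass

def validate_change_template(pool_vms_delete_protected, old_template, new_template):
    if new_template != old_template and pool_vms_delete_protected:
        raise ValidationError(
            "Cannot change the VM Template when the VMs created are set to "
            "Delete Protected."
        )

# Allowed: the template is unchanged.
validate_change_template(True, "Test1_T", "Test1_T")

# Blocked: switching a delete-protected pool to a subversion.
try:
    validate_change_template(True, "Test1_T", "Test1_T/20181212-2")
except ValidationError as e:
    print(e)
```

Failing with an error up front, instead of warning and proceeding, is what keeps the pool from ever reaching the unrecoverable 0-VM state.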
An additional case related to this bug:
- When a pool's Template subversion is set to "latest" and a new subversion of the template is created.
Logs?
I uploaded the engine.log file to my Red Hat Support Case after performing the following tests on RHV 4.2.8:

Test 1
======
1) Create a template named LinuxLab_T_latest_test_1
2) Create a pool named LinuxLab_latest_test
   template = LinuxLab_T_latest_test_1
   template subversion = latest
   number of VMs = 5
   number of running VMs = 0
3) Create a subversion template of LinuxLab_T_latest_test_1

Results of Test 1
=================
The following pool VMs were automatically detached from the pool:
LinuxLab_latest_test-2
LinuxLab_latest_test-3
LinuxLab_latest_test-4
LinuxLab_latest_test-5
The final result was a pool with one VM, LinuxLab_latest_test-1.

Test 2
======
1) Delete the previous subversion of template LinuxLab_T_latest_test_1
2) Detach the remaining VM from the LinuxLab_latest_test pool and delete all VMs.
3) Create a new pool named LinuxLab_latest_test
   template = LinuxLab_T_latest_test_1
   template subversion = latest
   number of VMs = 5
   number of running VMs = 3
4) Create a new subversion template of LinuxLab_T_latest_test_1

Results of Test 2
=================
Nothing happens when the creation of the subversion template completes. The final result is a LinuxLab_latest_test pool with 5 VMs (3 running), all of which are using the original template (LinuxLab_T_latest_test_1), not the subversion.
(In reply to Aram Agajanian from comment #12)
> I uploaded the engine.log file to my Red Hat Support Case after performing the following tests on RHV 4.2.8:
> [...]

If the edit button is enabled after these tests, then they are different issues. This patch specifically addressed the delete protection issue and the inability to edit afterwards. Please confirm; if so, we may treat these issues separately.
(In reply to Steven Rosenberg from comment #13) > > If the edit button is enabled after these tests, then they are different > issues. This patch specifically addressed the delete protection issue and > inability to edit afterwards. Please confirm. If so we may treat these > issues separately. Yes, the edit button for the pool is enabled after the tests in comment #12.
(In reply to Aram Agajanian from comment #15)
> Yes, the edit button for the pool is enabled after the tests in comment #12.

After Test 2 and then shutting down the 3 running VMs in the pool, the Edit button is disabled. I have also found that some of the results of these tests today are different from the results on Friday.
Created attachment 1577199 [details]
Test1 - VMs still on Pool

I performed Test 1 as described in the following comment: https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c12

Creating the subversion in item 3 did not detach the VMs from the pool. As one can see in the screenshot, the VMs still have the pool icons in the second column (and display "Desktop pool" in the tooltips).

Please provide the version the reported issue was reproduced on. I am using what will be 4.4.
Created attachment 1577200 [details]
Test1 - A Template and its subversion

As per previous comment.
Created attachment 1577201 [details]
Test1 - Pool still has 5 VMs

As per previous comment.
Please review comments 19-21 and the screenshots showing that after performing Test 1, the VMs are still attached to the pool.

It is assumed that Test 2 is dependent upon Test 1, but I was not able to remove the template subversion because it was being used by the VMs.

One additional step: I first needed to create a VM in order to make a template from it. Please advise if your procedure was different.

Again, please provide version information: not just the engine version, but also the version your hosts are running.
(In reply to Steven Rosenberg from comment #22)
> Please review comments 19-21 and the screen shots showing that after performing Test1, the VMs are still attached to the Pool.
>
> It is assumed that Test2 is dependent upon Test1, but I was not able to remove the Template subversion because it was being used by the VMs.

When I tested last week, the VMs weren't removed and re-created, so they weren't using the template subversion.

> One additional step was I first needed to create a VM in order to make a template from it. Please advise if your procedure was different.

I had an existing VM that I used.

> Again please provide version information, host just on the engine, but what version your host is running.

rhvm-4.2.8.2-0.1.el7ev.noarch

One host (host02) seems to be running older versions than the others. However, when I run "Check for Upgrade" for that host there are "no updates found".

[root@host01 ~]# rpm -qa ovirt\* | sort
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-host-4.3.2-1.el7ev.x86_64
ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch

[root@host02 ~]# rpm -qa ovirt\* | sort
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-host-4.2.3-1.el7ev.x86_64
ovirt-host-dependencies-4.2.3-1.el7ev.x86_64
ovirt-host-deploy-1.7.4-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.34-1.el7ev.noarch
ovirt-imageio-common-1.4.5-0.el7ev.x86_64
ovirt-imageio-daemon-1.4.5-0.el7ev.noarch
ovirt-node-ng-nodectl-4.2.0-0.20170814.0.el7.noarch
ovirt-provider-ovn-driver-1.2.17-1.el7ev.noarch
ovirt-setup-lib-1.1.5-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch

[root@host03 ~]# rpm -qa ovirt\* | sort
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-host-4.3.2-1.el7ev.x86_64
ovirt-host-dependencies-4.3.2-1.el7ev.x86_64
ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch
ovirt-node-ng-nodectl-4.3.0-0.20181213.0.el7ev.noarch
ovirt-provider-ovn-driver-1.2.20-1.el7ev.noarch
ovirt-vmconsole-1.0.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.7-1.el7ev.noarch
(In reply to Aram Agajanian from comment #23)
> When I tested last week, the VMs weren't removed and re-created so they weren't using the Template subversion.
>
> I had an existing VM that I used.
>
> rhvm-4.2.8.2-0.1.el7ev.noarch
>
> One host (host02) seems to be running older versions than the others. However, when I run "Check for Upgrade" for that host there are "no updates found".
> [...]
Please test the scenarios in comment #12 with the versions in comment #23 and see if we can reproduce Aram's scenarios. This way we can decide whether an upgrade is recommended and, if so, to which version. It seems to work fine on the master branch.
Hi Steven,

I reproduced the scenario and got the result described in step 4 of https://bugzilla.redhat.com/show_bug.cgi?id=1659161#c0 ("Error while executing action Edit VM Pool properties: Internal Engine Error") for engine version 4.2.8.7-0.1.el7ev.

Then I tested the same scenario in D/S ovirt-engine-4.3.4.1-0.1.el7.noarch, and for step 4 I see that there is no error in the engine, but I'm not sure that the behavior is completely correct: I can edit the pool only once and replace the template there with the subversion without any error. However, the pool VMs are detached from the pool after this (Stateless is checked for them), and the pool cannot be edited again; only the Remove option is available for it.

The same happens on master ovirt-engine-4.4.0-0.0.master.20190509133331.gitb9d2a1e.el7.noarch: I created a pool with 5 VMs based on a Delete Protected template, then edited the pool, replacing the template with the base version. As a result, all 5 VMs are detached from the pool. The pool can only be removed; no more editing is available.
Small correction: "edited the pool, replacing the template with the subversion" (instead of with the base version).
(In reply to Polina from comment #26)
> I can edit the pool only once and replace the template there to the subversion without no error. Though the pool vms are detached from the pool after this (Stateless is checked for them) and the pool could not be edited again. only Remove option is available for it.

The delete protection fix was not backported, so you should also test this on the master branch. Also, please check Test 1 without Delete Protection enabled as well. Thank you, Polina.
verified_upstream on ovirt-engine-4.4.0-0.0.master.20190803234902.gitc1df3db.el7.noarch, both scenarios: with Delete Protection and without. With Delete Protection, editing the pool from the template to a subversion is now disallowed: "Cannot change the VM Template when the VMs created are set to Delete Protected."
sync2jira
QE verification bot: the bug was verified upstream
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed: [Found non-acked flags: '{}'] For more info please contact: rhv-devops
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3247