Description of problem:

As described in BZ1543218, in RHV 4.1 the engine allowed:
1. Import HE disks from the HE SD
2. Delete/Move them => HE setup is broken

Now that BZ is closed with errata, I can see the disks are now named and they show up in a fresh installation automatically. That's good. But this means step 1 is no longer necessary in RHV 4.2: one can go straight to step 2 and "manage" these disks, so breaking the setup is even easier than before.

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.4.5-1.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Fresh install
2. Delete the HE volumes

Actual results:
HE is broken

Expected results:
Do not allow the user to manage these images.

Additional info:
2018-07-13 13:40:48,394+10 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-740) [b02d43d9-1641-431a-96e5-892de8c10217] EVENT_ID: USER_FINISHED_REMOVE_DISK(2,014), Disk he_sanlock was successfully removed from domain hosted_storage (User admin@internal-authz).
2018-07-13 13:41:15,440+10 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-750) [4d7a475d-e03e-45e2-8609-67433c833f54] EVENT_ID: USER_FINISHED_REMOVE_DISK(2,014), Disk HostedEngineConfigurationImage was successfully removed from domain hosted_storage (User admin@internal-authz).
2018-07-13 13:47:37,684+10 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-855) [841e241c-3890-4c56-97be-3569049325e6] EVENT_ID: USER_FINISHED_REMOVE_DISK(2,014), Disk he_metadata was successfully removed from domain hosted_storage (User admin@internal-authz).
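For illustration, a minimal sketch of how easily these disks can be removed through the public oVirt Java SDK (v4). The URL, credentials, and disk UUID are placeholders, and the exact builder options are assumptions based on the standard SDK usage pattern, not a recipe taken from this report:

    import org.ovirt.engine.sdk4.Connection;
    import org.ovirt.engine.sdk4.ConnectionBuilder;

    public class RemoveHeDisk {
        public static void main(String[] args) throws Exception {
            // Connect to the engine API (URL and credentials are placeholders).
            Connection connection = ConnectionBuilder.connection()
                .url("https://engine.example.com/ovirt-engine/api")
                .user("admin@internal")
                .password("password")
                .insecure(true)
                .build();
            try {
                // Nothing stops this call from targeting a hosted-engine disk,
                // e.g. he_sanlock or he_metadata from the log above.
                connection.systemService()
                    .disksService()
                    .diskService("<he-disk-uuid>")   // placeholder UUID
                    .remove()
                    .send();
            } finally {
                connection.close();
            }
        }
    }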
It is very unlikely that the user would intend to delete these disks. Let's try to add some validation for the next release.
Has BZ #1543218 solved this?
(In reply to Yaniv Lavi from comment #5)
> Has BZ #1543218 solved this?

No. As explained in comment #0, BZ1543218 was closed with errata, but IMHO it does not fix the problem. The fix was partial at best; I don't understand why that BZ was closed.
This bug has not been marked as a blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Simone, is there any way we can block this? I'm not a big fan of doing it according to the disk alias.
(In reply to Tal Nisan from comment #11)
> Simone, is there any way we can block this? I'm not a big fan of doing it
> according to the disk alias.

For the same reason, on the engine VM itself we are using a special value (6) in the origin field at the DB level. I don't know if there is any field on the disk tables that we could abuse in the same way to flag these disks as special without side effects.
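To make the origin-field trick concrete, here is a minimal sketch that looks up the hosted-engine VM in the engine database over JDBC. The table and column names (vm_static, origin) and the connection parameters are assumptions for illustration; the special value 6 comes from the comment above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FindHeVm {
        public static void main(String[] args) throws Exception {
            // Requires the PostgreSQL JDBC driver on the classpath;
            // host, database, and credentials are placeholders.
            try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/engine", "engine", "password");
                 Statement st = db.createStatement();
                 // origin = 6 is the special value mentioned above that marks
                 // the hosted-engine VM at the DB level.
                 ResultSet rs = st.executeQuery(
                     "SELECT vm_name, origin FROM vm_static WHERE origin = 6")) {
                while (rs.next()) {
                    System.out.println(rs.getString("vm_name")
                        + " origin=" + rs.getInt("origin"));
                }
            }
        }
    }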
We shall solve this by using a new content type for hosted engine disks, and as a precaution we'll also check the disk alias to stay compatible with older versions.
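A minimal, self-contained sketch of what such a validation could look like. The enum values are modeled on the content-type approach described above, and the legacy alias list is taken from the disk names in the log in comment #0; this is illustrative, not the actual engine code:

    import java.util.Set;

    public class RemoveDiskValidation {
        // Illustrative content types; a new hosted-engine family flags HE disks.
        enum DiskContentType {
            DATA, OVF_STORE,
            HOSTED_ENGINE, HOSTED_ENGINE_SANLOCK,
            HOSTED_ENGINE_CONFIGURATION, HOSTED_ENGINE_METADATA
        }

        // Fallback for disks created by older versions that predate the
        // content-type flag; names taken from the audit log in comment #0.
        private static final Set<String> LEGACY_HE_ALIASES =
            Set.of("he_sanlock", "he_metadata", "HostedEngineConfigurationImage");

        static boolean isRemovalAllowed(DiskContentType contentType, String alias) {
            boolean heByContentType =
                   contentType == DiskContentType.HOSTED_ENGINE
                || contentType == DiskContentType.HOSTED_ENGINE_SANLOCK
                || contentType == DiskContentType.HOSTED_ENGINE_CONFIGURATION
                || contentType == DiskContentType.HOSTED_ENGINE_METADATA;
            // Block removal if either the content type or the alias marks
            // the disk as part of the hosted-engine setup.
            return !heByContentType && !LEGACY_HE_ALIASES.contains(alias);
        }

        public static void main(String[] args) {
            System.out.println(isRemovalAllowed(DiskContentType.DATA, "my_data_disk")); // true
            System.out.println(isRemovalAllowed(DiskContentType.DATA, "he_sanlock"));   // false
            System.out.println(isRemovalAllowed(DiskContentType.HOSTED_ENGINE, "vm0")); // false
        }
    }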
I tried to delete the HE VM's disks one by one and got:

"Operation Canceled
Error while executing action: Cannot remove Virtual Disk. The disk is a part of Hosted Engine."

Works for me on these components:
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
rhvm-appliance-4.3-20190328.1.el7.x86_64
Linux 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.6 (Maipo)

Tested on RHEL hosts. Moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1085