Description of problem:
A VM with shared disks does not contain information about those disks in its configuration in the OVF_STORE.

Version-Release number of selected component (if applicable):
RHV 4.4.3

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with one shared disk.
2. Update the OVF_STORE of the related storage domain.
3. Extract the OVF_STORE with tar and examine the VM_<UUID>.ovf file.

Actual results:
The shared disks are not part of the configuration.

Expected results:
The shared disks are part of the configuration.

Additional info:
The missing configuration prevents automated disaster recovery fail-over, as the shared disks have to be manually imported and attached to the VMs. The shared disks are not included in the configuration because the generated OVF excludes all disks that are not snapshotable.
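The "extract the OVF_STORE with tar" step can also be scripted. A minimal sketch, assuming the OVF_STORE payload has already been read off the storage domain as a tar archive; the archive built here and the VM UUID are illustrative stand-ins, not real storage-domain content:

```python
import io
import tarfile

def list_vm_ovfs(tar_bytes):
    """Return the names of the per-VM .ovf entries inside an OVF_STORE tar."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".ovf")]

# Build a mock OVF_STORE archive for illustration only; a real one is
# obtained by extracting the OVF_STORE disk image from the storage domain.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"<ovf:Envelope/>"
    info = tarfile.TarInfo(name="VM_11111111-2222-3333-4444-555555555555.ovf")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

print(list_vm_ovfs(buf.getvalue()))
```

Each VM_<UUID>.ovf entry listed this way is the XML configuration to inspect for the missing shared-disk entries.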
*** Bug 1956238 has been marked as a duplicate of this bug. ***
This issue affects import from a storage domain, so we don't have to verify the DR functionality.
We are past 4.5.0 feature freeze, please re-target.
Testing instructions - some tips, I'm sure you know better :)

Environment:
1) 2 (or more) VMs sharing the same shareable disk.
2) A few non-shared disks inside the above VMs.
3) Other VMs with non-shareable disks.
4) A few non-attached disks.

Testing:
1) Optional checks - not sure whether QE performs such testing.
   a. If you check database tables as part of testing, you may verify that the "ovf_data" column of the "vm_ovf_generations" table holds XML (the OVF configuration) that contains the *shared* disks. Previously only *non-shared* disks were included in the XML.
   b. If you test the OVF_STORE located on the storage file system, you can extract it with tar and examine the VM_<UUID>.ovf file to see the XML inside it (it should match the database - the "ovf_data" column of the "vm_ovf_generations" table).
2) Basic scenarios:
   a. Put the storage domain in maintenance mode and then detach it.
   b. Attach that storage domain to another data center (this bug actually started as a disaster recovery scenario, so ideally use a separate fail-over environment).
   c. Verify that you can import the VMs and that they correctly include both shared and non-shared disks. Verify that the shared disks are correctly shared by the VMs as they were originally (and that the sharing is displayed both in the general "Disks" menu and in the specific VMs' "Disks" tab).
   d. Obviously, no errors should be produced when you import VM2 after importing VM1 when both VMs share the same disk(s).
3) Advanced scenarios: Repeat the above a few times back and forth, while also adding more VMs and/or shareable/non-shareable disks to the existing VMs. For example, add a new VM3 *between* importing VM1 and VM2 and make VM3 use the same shared disk, etc.
4) Negative flows:
   a. Make a disk non-shareable after importing VM1 and try to import VM2. Make it shareable again and then retry the import.
5) Very-very negative flows:
   a. What happens if, after importing VM1 (with the shared disk), VM1 or the shared disk is removed, and then VM2 is imported? Is this a scenario that makes sense at all from the user's PoV? Should it be handled?
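The optional checks above (1a/1b) boil down to confirming that the shared disks now appear in the OVF XML. A minimal sketch of that check; the XML fragment and the "shareable" attribute name are illustrative assumptions, not the exact oVirt OVF schema:

```python
import xml.etree.ElementTree as ET

# Illustrative OVF fragment; a real VM_<UUID>.ovf comes from the extracted
# OVF_STORE (or from the "ovf_data" column of "vm_ovf_generations").
OVF = """\
<Envelope>
  <Section type="DiskSection">
    <Disk diskId="aaa-111" shareable="false"/>
    <Disk diskId="bbb-222" shareable="true"/>
  </Section>
</Envelope>
"""

def disk_ids(ovf_xml):
    """Return (all disk IDs, shared disk IDs) found in the OVF document."""
    disks = ET.fromstring(ovf_xml).findall(".//Disk")
    all_ids = [d.get("diskId") for d in disks]
    shared = [d.get("diskId") for d in disks if d.get("shareable") == "true"]
    return all_ids, shared

all_ids, shared = disk_ids(OVF)
print(all_ids, shared)
# Before the fix, shared disks were absent from the OVF entirely;
# after the fix, their entries should be present alongside the others.
```

Comparing the returned disk IDs against the VM's full disk list (shared and non-shared) gives a quick pass/fail for check 1a/1b.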
Verified successfully.

Versions:
rhv-4.5.0-7
ovirt-engine-4.5.0.4-0.1
vdsm-4.50.0.13-1

Verified according to the steps provided in comment 19.

Regarding the question about the negative flow:
- If we remove VM1, the disk is still shared and is attached only to VM2; we also attached it to another new VM without any issue.
- If we delete the shared disk, it is detached from VM2 as well. In this state, if we try to import a VM3 that was using the shared disk, the import does not succeed, because VM3's OVF XML still references the removed shared disk: "Cannot import VM. VM's Image does not exist. In order to import partial VM, select 'allow partial' checkbox." Partial import will cause the VM to register even if disks are missing or already exist. If we choose partial import, the VM is imported successfully without the missing disk.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.0] security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4711