Bug 2028242
| Summary: | VM previously imported as OVA is missing after detach/import of Data Storage Domain | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | sde <sebastien.daubigne> |
| Component: | ovirt-engine | Assignee: | Artiom Divak <adivak> |
| Status: | CLOSED ERRATA | QA Contact: | Shir Fishbain <sfishbai> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.3.6 | CC: | adivak, ahadas, bugs, emarcus, eshames, sfishbai |
| Target Milestone: | ovirt-4.5.3-async | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-engine-4.5.3.4 | Doc Type: | Bug Fix |
| Doc Text: | Previously, VMs imported from OVAs created by other systems (not oVirt/RHV) were not stored in the OVF store on the storage domain. As a result, when the storage domain was detached, these VMs could not be imported back from it. In this release, VMs imported from OVA files/directories created by another system are also stored in the OVF store on the storage domain, and can be imported from the storage domain when it is attached to a data center. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-11-30 07:53:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
sde
2021-12-01 19:16:38 UTC
The documentation text flag should only be set after the 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Tried this flow on rhv-4.4.10 and VM1 appeared in the Storage Domain "VM Import" tab.

(In reply to Evelina Shames from comment #3)
> Tried this flow on rhv-4.4.10 and VM1 appeared in the Storage Domain "VM Import" tab.

Thank you for testing on 4.4. I'm using oVirt 4.3.10 (embedded in Oracle OLVM 4.3). Which type of Storage Domain do you use? (I use an FC Data Storage Domain.)

I don't think it is related to the type of the storage domain. If you don't see that VM in the "Import VM" tab but you do see other VMs, it means that the VM definition was not written to the OVF_STORE on that storage domain. How did you import the VM from the OVA, via the webadmin or virt-v2v from the command line?

(In reply to Arik from comment #5)
> How did you import the VM from the OVA, via the webadmin or virt-v2v from the command line?

The VM was imported via the GUI (Import VM => OVA).

(In reply to sde from comment #6)
> The VM was imported via the GUI (Import VM => OVA)

OK, that rules out the possibility that the disks were uploaded but the VM wasn't added yet when you detached the storage domain - the VM and the disks are added at the same time in this flow, and if the disks were written to the OVF_STORE then the VM configuration should be there as well.

oVirt 4.3 has not been supported for a long time, and as Evelina found that it works on 4.4, there's not much we can do without more information from your environment.

(In reply to Arik from comment #7)
> oVirt 4.3 has not been supported for a long time, and as Evelina found that it works on 4.4, there's not much we can do without more information from your environment.
I understand, thank you for the assistance. I have opened an SR with Oracle support for analysis (OLVM currently only supports 4.3).

Hello,

A follow-up on this bug: after more investigation with Oracle support and the OLVM dev team, we can confirm that there is a bug in the UpdateOvf flow, and that the issue is reproducible in upstream oVirt 4.4 and OLVM 4.4. The subtlety is that it only impacts OVAs generated by some external (non-oVirt) products, especially VMware, and also some OVM (Xen-based) and VirtualBox images. I have identified a publicly available OVA to test the issue.

Steps to reproduce:

Download this non-oVirt OVA (this one comes from VirtualBox, but any OVA generated from VMware should also encounter the bug): https://edelivery.oracle.com/osdc/faces/SoftwareDelivery
Oracle Linux KVM Templates for Oracle Linux 1.0.0.0.0 for Linux x86-64, V988166-01.zip
Oracle Linux 7 Update 7 Template for Oracle Linux KVM, 804.2 MB, OLKVM_OL7U7_X86_64.ova

1. Import VM1 from the OVA file to a Data Storage Domain.
2. Create VM2 on the same Data Storage Domain.
3. Maintenance/Detach/Remove the Data Storage Domain from the Data Center.
4. Import/activate the Data Storage Domain in the same Data Center.
5. VM1 won't appear in the Storage Domain "VM Import" tab => VM1 configuration is lost. Its disks are visible with no alias/name in the "Disk Import" tab.
6. VM2 is visible in the Storage Domain "VM Import" tab => VM2 import is successful.

Actual results: VM lost after Storage Domain detach/import.

Expected results: VM should be visible to import.

The bug in the UpdateOvf flow results in the OVF_STORE not being populated for the VM affected by this issue. Here is an Oracle action plan which fixes the missing OVF_STORE entries for VMs already imported:

==========ACTION PLAN===============

1. Perform an Engine DB backup (if not done already). Note: to open the Engine DB,
ssh to the manager and then run:

```
su - postgres
scl enable rh-postgresql10 "psql -d engine -U postgres"
```

See: OLVM: Backup And Restore The Oracle Linux Virtualization Manager Engine (Doc ID 2532928.1).

2. To see the VMs imported from VMware in the Engine DB:

```sql
SELECT vm_guid, vm_name, storage_pool_id, creation_date
FROM public.vms
WHERE vm_guid NOT IN (SELECT vm_guid FROM vm_ovf_generations);
```

3. Create a recovery table to store the VMware VM info:

```sql
CREATE TABLE recovery_table AS
SELECT vm_guid, vm_name, storage_pool_id, creation_date
FROM public.vms
WHERE vm_guid NOT IN (SELECT vm_guid FROM vm_ovf_generations);
```

4. Update the InsertOVFDataForEntities function in the engine DB by running the following command:

```sql
CREATE OR REPLACE FUNCTION InsertOVFDataForEntities (
    v_entity_guid UUID,
    v_entity_name VARCHAR(255),
    v_entity_type VARCHAR(32),
    v_architecture INT,
    v_lowest_comp_version VARCHAR(40),
    v_storage_domain_id UUID,
    v_ovf_data TEXT,
    v_ovf_extra_data TEXT,
    v_status INTEGER
) RETURNS VOID AS $PROCEDURE$
BEGIN
    INSERT INTO unregistered_ovf_of_entities (
        entity_guid, entity_name, entity_type, architecture,
        lowest_comp_version, storage_domain_id, ovf_extra_data,
        ovf_data, status
    ) VALUES (
        v_entity_guid, v_entity_name, v_entity_type, v_architecture,
        v_lowest_comp_version, v_storage_domain_id, v_ovf_extra_data,
        v_ovf_data, v_status
    );

    UPDATE unregistered_ovf_of_entities u
    SET ovf_data = vog.ovf_data
    FROM vm_ovf_generations vog
    WHERE vog.vm_guid = u.entity_guid
      AND u.entity_guid = v_entity_guid
      AND v_ovf_data IS NULL;
END;$PROCEDURE$
LANGUAGE plpgsql;
```

5. Verify that the SQL function is updated:

```sql
SELECT prosrc FROM pg_proc WHERE proname = 'insertovfdataforentities';
```

6. Insert entries with ovf_data as NULL for all VMware-origin VMs:

```sql
INSERT INTO public.vm_ovf_generations
SELECT vm_guid, storage_pool_id, 1, NULL
FROM public.vms
WHERE vm_guid NOT IN (SELECT vm_guid FROM vm_ovf_generations);
```

7. Edit all imported VMs to change their descriptions manually. Refer to the output of step 2 to ensure all 46 VMs are edited. For example, change the Description to "Imported from VMware".

8. After manually updating the description for all the VMs, go to Storage >> Domains and run Update OVF for all the Storage Domains (important, as this updates the OVF data). Make sure you run Update OVF for every Storage Domain which has VMware VMs attached.

9. Check that ovf_data is now populated for all the VMware VMs in the vm_ovf_generations table. The query below should return no rows after editing all the VMs and running the OVF update:

```sql
SELECT vms.vm_guid, vm_name
FROM vms, vm_ovf_generations
WHERE vm_ovf_generations.vm_guid = vms.vm_guid
  AND ovf_data IS NULL
  AND vms.vm_guid IN (SELECT vm_guid FROM recovery_table);
```

If the query lists some VMs, edit those VMs and run the OVF update again.

10. Once the VMs are visible and imported on the alternate site, delete the recovery table (important):

```sql
DROP TABLE recovery_table;
```

==============================

Lowering the severity because the origin type can be overridden if needed.

QE doesn't have the capacity to verify this bug during 4.5.1.

(In reply to Arik from comment #10)
> Lowering the severity because the origin type can be overridden if needed

Actually it doesn't seem related to the origin type field - the OVF should be stored on the storage domain for all but external VMs, and then added as an unregistered entity. Need to take a deeper look at this - it should be reproducible also with other OVAs that were not created by oVirt.

Trying to reproduce the bug.
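The heart of the action plan's diagnosis is the anti-join used in its step 2: a VM whose guid has no row in vm_ovf_generations is exactly a VM whose definition was never written to any OVF_STORE. A minimal runnable sketch of that logic (a toy schema reduced to the columns the query touches, using SQLite in place of the engine's PostgreSQL):

```python
import sqlite3

# Toy engine DB: VM1 was imported from a foreign OVA and never got an
# OVF generation row; VM2 was created in the engine and did.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vms (vm_guid TEXT, vm_name TEXT);
    CREATE TABLE vm_ovf_generations (vm_guid TEXT, ovf_data TEXT);
    INSERT INTO vms VALUES ('guid-1', 'VM1-from-ova'), ('guid-2', 'VM2');
    INSERT INTO vm_ovf_generations VALUES ('guid-2', '<ovf/>');
""")

# The anti-join from step 2: VMs missing from vm_ovf_generations.
missing = conn.execute("""
    SELECT vm_guid, vm_name FROM vms
    WHERE vm_guid NOT IN (SELECT vm_guid FROM vm_ovf_generations)
""").fetchall()
print(missing)  # → [('guid-1', 'VM1-from-ova')]
```

This mirrors the symptom in the report: VM1 is the one that disappears after a detach/attach cycle, while VM2 survives.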
While reproducing the bug I am getting an error while trying to import the OLKVM_OL7U7_X86_64.ova file:

```json
{
  "message": "libguestfs error: inspect_os: mount exited with status 32: mount: /tmp/btrfs1qqXXM: unknown filesystem type 'btrfs'.",
  "timestamp": "2022-09-08T10:23:01.026519876+03:00",
  "type": "error"
}
```
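The mount failure above is an environment limitation rather than part of the bug itself: libguestfs can only inspect filesystems the host kernel supports. One way to check whether a given Linux host's kernel registers btrfs (a hypothetical diagnostic, not a command from this report) is to read /proc/filesystems:

```python
from pathlib import Path

# /proc/filesystems lists every filesystem type the running kernel
# knows; "nodev" entries are virtual filesystems without block devices.
supported = {
    line.split()[-1]
    for line in Path("/proc/filesystems").read_text().splitlines()
    if line.strip()
}
print("btrfs" in supported)  # False on stock RHEL 8 kernels
```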
This bug has low overall severity and is not going to be further verified by QE. If you believe special care is required, feel free to properly align the relevant severity, flags, and keywords to raise PM_Score, or use one of the bumps ('PrioBumpField', 'PrioBumpGSS', 'PrioBumpPM', 'PrioBumpQA') in Keywords to raise its PM_Score above the verification threshold (1000).
(In reply to Artiom Divak from comment #13)
> Trying to reproduce the bug. While reproducing the bug I am getting an error while trying to import the OLKVM_OL7U7_X86_64.ova file:
> { "message": "libguestfs error: inspect_os: mount exited with status 32: mount: /tmp/btrfs1qqXXM: unknown filesystem type 'btrfs'.", "timestamp": "2022-09-08T10:23:01.026519876+03:00", "type": "error" }

We're not able to import that OVA to our environment, any hint on how you did that?

(In reply to Arik from comment #15)
> We're not able to import that OVA to our environment, any hint on how you did that?

That's because (I suppose) you're running oVirt 4.4, based on RHEL 8, which dropped support for btrfs (not included in the kernel). This OVA image includes a btrfs partition. I'm running OLVM 4.3, based on Oracle Linux 7 / oVirt 4.3, which is why it is able to import/mount the image with btrfs (it was a tech preview in RHEL 7 / OL7). Anyway, I found and tested another image with the same issue, but without any btrfs partition (only ext4), so you should be able to import it: https://github.com/Virtual-Machines/Xubuntu-VirtualBox/releases/download/latest/XubuntuFocal.ova

The fix for this bug didn't get into 4.5.3, moving to GitHub: https://github.com/oVirt/ovirt-engine/issues/674

It turned out to be related to the origin type after all.

Converting this bug to RHV in order to backport the fix.

Verified by the following steps:
1. Download the image XubuntuFocal.ova.
2. Import the image to oVirt/RHV.
3. Set the SD to maintenance, then detach and remove the SD.
4. Import back the storage domain and activate it.
5. Import the VM from the storage domain.

The VM is visible to import from the file storage domain after importing the storage domain.

RHV build name: rhv-4.5.3-6
ovirt-engine-4.5.3.4-1.el8ev.noarch
vdsm-4.50.3.5-1.el8ev.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV Manager (ovirt-engine) [ovirt-4.5.3-2] update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8695
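As background to the verification flow: an OVA is a tar archive whose descriptor member (an .ovf XML file) carries the VM definition, and it is this definition, not the disks, that fails to reach the OVF_STORE in the buggy flow. A self-contained sketch of locating the descriptor inside an OVA (the archive below is synthesized in memory for illustration; real OVAs also contain disk images and usually a manifest):

```python
import io
import tarfile

def ovf_members(ova_bytes: bytes) -> list[str]:
    """Names of .ovf descriptor members inside an OVA (a tar archive)."""
    with tarfile.open(fileobj=io.BytesIO(ova_bytes)) as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".ovf")]

# Synthesize a stand-in OVA containing only a minimal descriptor.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    descriptor = b'<?xml version="1.0"?><Envelope/>'
    info = tarfile.TarInfo("XubuntuFocal.ovf")
    info.size = len(descriptor)
    tar.addfile(info, io.BytesIO(descriptor))

print(ovf_members(buf.getvalue()))  # → ['XubuntuFocal.ovf']
```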