Bug 1478283
Summary: | [JNPE] Cannot import VM from previously used SD / ImportVmFromConfigurationCommand' failed: null | ||
---|---|---|---|
Product: | [oVirt] ovirt-engine | Reporter: | Jiri Belka <jbelka> |
Component: | BLL.Storage | Assignee: | Maor <mlipchuk> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Kevin Alon Goldblatt <kgoldbla> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.1.4.2 | CC: | amureini, bugs, jbelka, lveyde, tnisan |
Target Milestone: | ovirt-4.1.6 | Flags: | rule-engine: ovirt-4.1+ |
Target Release: | 4.1.6.2 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ovirt-engine-4.1.6.2 | Doc Type: | Bug Fix |
Doc Text: |
Cause:
Attempting to import a VM whose memory disks reside on storage domains that were not imported into the data center.
Consequence:
The VM failed to be imported.
Fix:
Added validation that the storage domains of all memory disks exist (see the sketch after this table). If a storage domain does not exist, the engine fails the operation unless the 'partial_import' flag is true; in that case the VM is imported without the memory disks that reside on the missing storage domains.
Result:
The import now fails with a proper message.
Setting 'partial_import' to true makes the operation succeed.
|
Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2017-09-19 10:02:09 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
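The fix described in the Doc Text amounts to a pre-import validation plus an optional filtering step. Below is a minimal Java sketch of that idea; `MemoryDisk`, `storageDomainExists`-style checks, and `ImportValidationException` are hypothetical names chosen for illustration, not the actual ovirt-engine API.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// A minimal sketch of the validation described in the Doc Text, under the
// assumption that each memory disk knows the ID of the storage domain its
// memory volume lives on, and that knownDomainIds holds the storage domains
// currently attached to the data center.
public final class MemoryDiskValidator {

    /** Hypothetical failure type standing in for the engine's validation result. */
    public static class ImportValidationException extends RuntimeException {
        public ImportValidationException(String message) {
            super(message);
        }
    }

    /** Hypothetical stand-in for the engine's memory-disk representation. */
    public interface MemoryDisk {
        String storageDomainId();
    }

    /**
     * Returns the memory disks that can be imported. Fails with a clear
     * message (instead of an NPE) when a memory disk's storage domain is
     * missing, unless partialImport is true, in which case those disks are
     * dropped and the rest of the VM is imported.
     */
    public static List<MemoryDisk> validateMemoryDisks(List<MemoryDisk> memoryDisks,
                                                       Set<String> knownDomainIds,
                                                       boolean partialImport) {
        List<MemoryDisk> missing = memoryDisks.stream()
                .filter(d -> !knownDomainIds.contains(d.storageDomainId()))
                .collect(Collectors.toList());

        if (missing.isEmpty()) {
            return memoryDisks; // all storage domains exist; import everything
        }
        if (!partialImport) {
            throw new ImportValidationException(
                    "Cannot import VM: the storage domain(s) of " + missing.size()
                    + " memory disk(s) do not exist in the data center");
        }
        // Partial import: keep only the memory disks whose domains exist.
        return memoryDisks.stream()
                .filter(d -> knownDomainIds.contains(d.storageDomainId()))
                .collect(Collectors.toList());
    }
}
```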
Description
Jiri Belka
2017-08-04 08:12:01 UTC
I was not correct about 'no action was occurring' and 'no task submitted'. I was expecting to see > 0 in the Tasks bar, but it failed so quickly that the number stayed at '0'... Anyway, I'd love to see a way to import VMs from data SDs.

~~~
engine=# select * from job;
                job_id                |        action_type        |                           description                           | status |               owner_id               | visible |         start_time         |          end_time          |      last_update_time      |            correlation_id            | is_external | is_auto_cleared | engine_session_seq_id
--------------------------------------+---------------------------+-----------------------------------------------------------------+--------+--------------------------------------+---------+----------------------------+----------------------------+----------------------------+--------------------------------------+-------------+-----------------+-----------------------
 b7001909-3990-4af9-818f-45cfa347b761 | ImportVmFromConfiguration | Importing VM jbelka-vhost1 from configuration to Cluster test01 | FAILED | 03656fe1-8a08-4926-b874-168bcec474af | t | 2017-08-04 03:45:47.033-04 | 2017-08-04 03:45:47.379-04 | 2017-08-04 03:45:47.379-04 | 0560a140-d6cd-42fb-99de-0a7fd96afd69 | f | t | 45570
 f6657a5f-7b62-4ace-a4e0-edd466550082 | ImportVmFromConfiguration | Importing VM jbelka-vhost1 from configuration to Cluster test01 | FAILED | 03656fe1-8a08-4926-b874-168bcec474af | t | 2017-08-04 03:47:32.311-04 | 2017-08-04 03:47:32.613-04 | 2017-08-04 03:47:32.613-04 | 91023f53-a013-445b-be48-7bb073557aa8 | f | t | 45570
 cf54692d-55c7-4561-bbdf-9dfcefd00133 | ImportVmFromConfiguration | Importing VM jbelka-vhost1 from configuration to Cluster test01 | FAILED | 03656fe1-8a08-4926-b874-168bcec474af | t | 2017-08-04 03:56:18.801-04 | 2017-08-04 03:56:19.034-04 | 2017-08-04 03:56:19.034-04 | 08e1a9e7-682f-4401-84e0-145c86df0d3e | f | t | 45570
 25d2025d-ae83-4d3a-85b2-2845b6015373 | ImportVmFromConfiguration | Importing VM jbelka-vhost1 from configuration to Cluster test01 | FAILED | 03656fe1-8a08-4926-b874-168bcec474af | t | 2017-08-04 04:01:11.717-04 | 2017-08-04 04:01:11.923-04 | 2017-08-04 04:01:11.923-04 | e77253c4-39cd-4669-9886-b7d451cb615b | f | t | 45570
(4 rows)
~~~

Possibly related to bug 1461387.

Hi Jiri,

Can you please share the OVF of the VM:

~~~
SELECT ovf_data FROM unregistered_ovf_of_entities WHERE entity_guid = '8f8fc16a-23ee-4621-a971-fd4b5e5fe3bd';
~~~

It looks like the storage domain that the memory snapshot resides on does not exist (it could be that it was not imported). We can verify that with the OVF you provide.

If there's another build for 4.1.5 I'd like to get this in, but we shouldn't block on it.

(In reply to Maor from comment #4)
> Hi Jiri,
>
> Can you please share the OVF of the VM:
> SELECT ovf_data FROM unregistered_ovf_of_entities where entity_guid =
> '8f8fc16a-23ee-4621-a971-fd4b5e5fe3bd'
>
> It looks like the storage domain that the memory snapshot resides on does
> not exist (it could be that it was not imported). We can verify that with
> the OVF you provide.

~~~
engine=# SELECT ovf_data FROM unregistered_ovf_of_entities where entity_guid = '8f8fc16a-23ee-4621-a971-fd4b5e5fe3bd';
 ovf_data
----------
(0 rows)
~~~

Thanks for the output Jiri.

Can you please upload the engine log again? The character set doesn't seem to be readable.
Also, can you please share the output of the unregistered_ovf_of_entities table:

~~~
SELECT * FROM unregistered_ovf_of_entities;
~~~

Hi Jiri,

Besides the null exception, I could not find anything in the log which could help me find the root cause of your issue.
I assume this is because your memory volume's storage domain does not exist in your setup. In that case the import should fail with a proper error message instead of null. I uploaded two patches that should fix that issue.
If you still have the storage domain with this VM which I can use to reproduce your exception, that would be great; if not, I will continue with the fix I prepared.

Verified with the following code:
---------------------------------------
ovirt-engine-4.1.6.2-0.1.el7.noarch
vdsm-4.19.31-1.el7ev.x86_64

Verified with the following scenario:
--------------------------------------
Steps to Reproduce:
1. Have an old 3.6 env; add a temp NFS share for the test.
2. Move a VM image to this temp NFS share, then detach/remove the share from the 3.6 engine.
3. Have a 4.1 env; attach the temp NFS share.
4. Import the VM from the temp NFS share - an error is now reported indicating that the VM cannot be imported.

Moving to VERIFIED
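For reference, the partial-import path discussed above can be driven from the oVirt v4 Java SDK via the register action on an unregistered VM found on an attached storage domain. The sketch below assumes the SDK's `allowPartialImport` request parameter is the API-level counterpart of the 'partial_import' flag mentioned in the Doc Text; the URL, credentials, and IDs are placeholders, and the exact builder methods should be checked against your SDK version.

```java
import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.ConnectionBuilder;
import org.ovirt.engine.sdk4.builders.ClusterBuilder;
import org.ovirt.engine.sdk4.services.StorageDomainVmService;

public class RegisterVmPartialImport {
    public static void main(String[] args) throws Exception {
        // Placeholder engine URL and credentials; adjust for a real setup.
        Connection connection = ConnectionBuilder.connection()
                .url("https://engine.example.com/ovirt-engine/api")
                .user("admin@internal")
                .password("secret")
                .insecure(true)
                .build();

        // Hypothetical IDs of the attached (previously used) storage domain
        // and of the unregistered VM discovered on it.
        String storageDomainId = "...";
        String unregisteredVmId = "...";

        StorageDomainVmService vmService = connection.systemService()
                .storageDomainsService()
                .storageDomainService(storageDomainId)
                .vmsService()
                .vmService(unregisteredVmId);

        // Register the VM into the target cluster. allowPartialImport(true)
        // lets the import succeed even when some of the VM's (memory) disks
        // sit on storage domains the engine does not know about.
        vmService.register()
                .cluster(new ClusterBuilder().name("test01").build())
                .allowPartialImport(true)
                .send();

        connection.close();
    }
}
```

Without `allowPartialImport(true)`, the same call should now fail with the validation message added by this fix rather than the original null-pointer error.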