Bug 1583176 - [v2v] VDSM Libvirt ERROR cannot access backing file after importing Instance from OpenStack
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.2.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Michal Skrivanek
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-28 12:08 UTC by Marko Vrgotic
Modified: 2020-06-26 16:38 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-01 14:48:50 UTC
oVirt Team: Virt
rbarry: ovirt-4.5?



Description Marko Vrgotic 2018-05-28 12:08:03 UTC
NOTES:
Regarding the component: I selected "services" as I was not sure which component group to select.
Regarding the version (above): I could not find a matching entry, as in oVirt it is 4.20.27.

Description of problem:

When trying to start the imported VM, it fails with the following message:
 
ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.

Version-Release number of selected component (if applicable):

OS Version:
RHEL - 7 - 5.1804.el7.centos.2
OS Description:
CentOS Linux 7 (Core)
Kernel Version:
3.10.0 - 862.3.2.el7.x86_64
KVM Version:
2.10.0 - 21.el7_5.3.1
LIBVIRT Version:
libvirt-3.9.0-14.el7_5.5
VDSM Version:
vdsm-4.20.27.1-1.el7.centos


How reproducible:

Every Time


Steps to Reproduce:
1. Connect from the oVirt UI to the OpenStack KVM host over qemu+tcp
2. Select and import a single VM instance from Nova
3. Attempt to start the imported VM

Actual results:

The VM is not able to boot because the backing file cannot be accessed.

Expected results:

The VM is imported as a single, independent image and is bootable.

Additional info:

 
Platform details:
oVirt SHE (self-hosted engine)
Version 4.2.2.6-1.el7.centos
GlusterFS, unmanaged by oVirt.
 
According to the log files, the VM is imported and converted from OpenStack successfully (one WARN, related to a different MAC address):
2018-05-24 12:03:31,028+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 7f178a5e
2018-05-24 12:48:33,722+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 3aa178c5
2018-05-24 12:48:47,291+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01, GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', originType='KVM', namesOfVms='[instance-00000673]'}), log id: 4c445109
2018-05-24 12:48:47,318+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM [instance-00000673]], log id: 4c445109
2018-05-24 12:49:20,466+02 INFO  [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-41) [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:20,586+02 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM instance-00000673 has MAC address(es) fa:16:3e:74:18:50, which is/are out of its MAC pool definitions.
2018-05-24 12:49:21,021+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm instance-00000673 to Data Center AVEUNL, Cluster AWSEUOPS
2018-05-24 12:49:28,816+02 INFO  [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-653407) [] Lock freed to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:28,911+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] START, ConvertVmVDSCommand(HostName = aws-ovhv-01, ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', vmName='instance-00000673', storageDomainId='2607c265-248c-40ad-b020-f3756454839e', storagePoolId='5a5de92c-0120-0167-03cb-00000000038a', virtioIsoPath='null', compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}), log id: 53408517
2018-05-24 12:49:29,010+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] EVENT_ID: IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting to convert Vm instance-00000673
2018-05-24 12:52:57,982+02 INFO  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-16) [df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]', sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
2018-05-24 12:59:24,575+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [2047673e] EVENT_ID: IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673 was imported successfully to Data Center AVEUNL, Cluster AWSEUOPS
 
Then trying to start the VM fails with the following messages:
2018-05-24 13:00:32,085+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653729) [] EVENT_ID: USER_STARTED_VM(153), VM instance-00000673 was started by admin@internal-authz (Host: aws-ovhv-06).
2018-05-24 13:00:33,417+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from 'WaitForLaunch' --> 'Down'
2018-05-24 13:00:33,436+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
2018-05-24 13:00:33,437+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun treatment
2018-05-24 13:00:33,455+02 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM instance-00000673 on Host aws-ovhv-06.
2018-05-24 13:00:33,460+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673  (User: admin@internal-authz).
 
Checking on the Gluster volume, the directory and files exist and the permissions are in order:
 
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
-rw-rw----.  1 vdsm kvm  14G May 24 12:59 8ecfcd5b-db67-4c23-9869-0e20d7553aba
-rw-rw----.  1 vdsm kvm 1.0M May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.lease
-rw-r--r--.  1 vdsm kvm  310 May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.meta
 
Then I checked the image info and noticed that the backing file entry points to a non-existent location, which does not and should not exist on oVirt hosts:
 
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info 8ecfcd5b-db67-4c23-9869-0e20d7553aba
image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
file format: qcow2
virtual size: 160G (171798691840 bytes)
disk size: 14G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
 
Can somebody advise me how to fix or address this? I need to import 200+ VMs from OpenStack to oVirt.
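Since every image in a bulk migration needs the same check, it may help to pre-scan the source images and flag any that still reference a backing file before importing them. Below is a minimal sketch (the function name is mine, not part of oVirt or QEMU) that reads only the fixed qcow2 header fields, big-endian, as documented in QEMU's qcow2 layout:

```python
import struct

QCOW2_MAGIC = 0x514649FB  # the bytes "QFI\xfb"

def qcow2_backing_file(path):
    """Return the backing-file path recorded in a qcow2 header, or None.

    Fixed header layout (all big-endian): magic at offset 0, version at 4,
    backing_file_offset (u64) at 8, backing_file_size (u32) at 16.
    """
    with open(path, "rb") as f:
        header = f.read(20)
        if len(header) < 20:
            raise ValueError("file too short to be a qcow2 image")
        magic, version, bf_offset, bf_size = struct.unpack(">IIQI", header)
        if magic != QCOW2_MAGIC:
            raise ValueError("not a qcow2 image")
        if bf_offset == 0 or bf_size == 0:
            return None  # standalone image; no backing-chain dependency
        # The backing-file path is stored elsewhere in the file, at bf_offset.
        f.seek(bf_offset)
        return f.read(bf_size).decode("utf-8", errors="replace")
```

An image for which this returns a path under /var/lib/nova/instances/_base/ would hit exactly the failure reported here; `qemu-img info` gives the same answer when it is available on the host.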

Comment 1 Tomáš Golembiovský 2018-05-28 13:45:28 UTC
Such an import either has to be prevented, or we have to squash the backing chain during import.
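Until such squashing is implemented in the import flow, one possible manual workaround is to flatten each overlay into a standalone image before importing it. A sketch only (the helper names are mine; it assumes qemu-img is installed and must run on the source OpenStack host, where the backing chain is still intact):

```python
import subprocess

def flatten_cmd(src, dst):
    # 'qemu-img convert' reads through the entire backing chain and writes a
    # standalone image, so the result no longer references the Nova _base file.
    return ["qemu-img", "convert", "-O", "qcow2", src, dst]

def flatten(src, dst):
    # Run this where '/var/lib/nova/instances/_base/...' still exists.
    subprocess.run(flatten_cmd(src, dst), check=True)
```

The flattened copy can then be imported without any external backing-file dependency.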

Comment 2 Ryan Barry 2018-08-15 16:38:11 UTC
Pushing out until there's a solution for squashing the backing chain

Comment 3 Marko Vrgotic 2018-11-27 07:35:33 UTC
Hello all,

Do you know when work will start on a solution for squashing the backing chain, so that the issue reported here can be addressed?

We have about 300 snowflake VMs to migrate from OpenStack to oVirt several months from now.

If a solution is not expected, I need to start looking into possible workarounds.

Thank you.

Comment 4 Michal Skrivanek 2020-03-18 15:45:38 UTC
This bug didn't get any attention for a while, and we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 5 Michal Skrivanek 2020-03-18 15:50:35 UTC
This bug didn't get any attention for a while, and we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 6 Marko Vrgotic 2020-03-24 08:26:09 UTC
Well, we have torn down our OpenStack and moved fully to oVirt. I would not ignore this bug, but I currently have no direct benefit from having it solved.
I leave it up to you.

Marko Vrgotic

Comment 7 Michal Skrivanek 2020-04-01 14:48:50 UTC
Closing old bug. Please reopen if it is still relevant or you want to work on it.

