Bug 970969 - engine: engine frees the lock on a wipe=true disk before LSM has finished deleting the src disk
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.3.0
Assignee: Ayal Baron
QA Contact: Haim
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-06-05 11:08 UTC by Dafna Ron
Modified: 2016-02-10 20:33 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-10 09:18:04 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
logs (1.78 MB, application/x-gzip), 2013-06-05 11:08 UTC, Dafna Ron

Description Dafna Ron 2013-06-05 11:08:04 UTC
Created attachment 757125 [details]
logs

Description of problem:

If we move a wipe=true disk, the engine releases the lock on the disk before the src deleteImage task has finished and been cleaned.
We can see in the UI that the disk is no longer locked, but the event log does not report that the move has finished.

Since the disk status changes and the wipe-after-delete can take several days, we need to decide what to do in such cases, because stopping the VM will cause a false failure in the LSM:

2013-06-05 13:45:26,442 INFO  [org.ovirt.engine.core.bll.EntityAsyncTask] (pool-4-thread-50) EntityAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction LiveMigrateDisk, executionIndex: 2
2013-06-05 13:45:26,480 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (pool-4-thread-50) [38050c73] Ending command with failure: org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand

while in actuality the disk has been moved and exists in the new domain. 
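
A quick way to see the window (a minimal sketch, assuming the commands are run on the host that is currently the SPM, i.e. the one running the deleteImage/wipe task): once the UI already shows the disk as unlocked, the async task should still be listed on that host, e.g.:

# on the SPM host, after the UI has released the disk lock
vdsClient -s 0 getAllTasksStatuses   # the deleteImage task should still be in a running state
vdsClient -s 0 getAllTasksInfo       # should also show the task verb (deleteImage), if this verb is available in the installed vdsm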

Version-Release number of selected component (if applicable):

sf17.4

How reproducible:

100%

Steps to Reproduce:
1. On iSCSI storage with two hosts, create a VM with a wipe=true preallocated disk
2. Run the VM and LSM the VM's disk (see the verification sketch after the steps)
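
Verification sketch for step 2 (not part of the original steps; the volume UUID below is taken from this report and is only illustrative): while the UI already shows the disk as unlocked, the volume's LV should, as far as I understand the LSM flow, still be listed under the source domain's VG on the SPM until the wipe completes, and only under the destination VG afterwards:

# on the SPM host; grep for the volume UUID of the migrated disk
lvs --noheadings -o lv_name,vg_name | grep f4567cb0-86de-4b9a-ad8c-20750ecc5299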

Actual results:


The engine releases the lock on the disk before the deleteImage has finished.
If we shut down the VM, the engine will report a failure on the LSM although it succeeded, and the deleteImage continues on the VDS.
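
To confirm that the deleteImage really keeps running on the host after the engine has reported the failure, the SPM's vdsm log can be checked (a sketch; /var/log/vdsm/vdsm.log is the default log location and the grep pattern is only a suggestion):

# on the SPM host
grep -i deleteimage /var/log/vdsm/vdsm.log | tail   # the deleteImage call and its task UUID; the task keeps running after the engine reports the LSM failure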

Expected results:

1. We do not free the lock (taking into account that the wipe can continue for days).
2. We free the lock, but separate the deleteImage from the LSM command in the case of a wipe disk.

-- If the second option is decided on, we should make sure to add release notes on this behaviour. We need to make sure that if we shut down the VM, migrate, or do any other action, the engine does not report a failure; the engine should also report success of the move, with a disclaimer in the event log about the deleteImage on the src, and we need to make sure that merge will not be affected.
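
If the second option is chosen, the event log alone will not tell the whole story, so a DB-side check could be documented as well. A rough sketch, assuming images_storage_domain_view also exposes the imagestatus column (2 = LOCKED in the engine schema, as far as I know) and that the psql connection details match a default engine setup:

# run on the engine host; shows where the disk is registered and whether it is still locked
psql -U engine -d engine -c "SELECT image_guid, imagestatus, storage_name
  FROM images_storage_domain_view
  WHERE image_guid = 'f4567cb0-86de-4b9a-ad8c-20750ecc5299';"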


Additional info: logs


[root@cougar02 ~]# lvs
  LV                                   VG                                   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  9a4cfb95-ce35-4bfc-aa30-4b51f010f839 38755249-4bb3-4841-bf5b-05f4a521514d -wi------   3.00g                                             
  f4567cb0-86de-4b9a-ad8c-20750ecc5299 38755249-4bb3-4841-bf5b-05f4a521514d -wi------   4.00g                                             
  f82a0d58-0791-4137-b1e6-22a8794acd2a 38755249-4bb3-4841-bf5b-05f4a521514d -wi------   2.00g                                             
  ids                                  38755249-4bb3-4841-bf5b-05f4a521514d -wi-ao--- 128.00m                                             
  inbox                                38755249-4bb3-4841-bf5b-05f4a521514d -wi-a---- 128.00m                                             
  leases                               38755249-4bb3-4841-bf5b-05f4a521514d -wi-a----   2.00g                                             
  master                               38755249-4bb3-4841-bf5b-05f4a521514d -wi-a----   1.00g                                             
  metadata                             38755249-4bb3-4841-bf5b-05f4a521514d -wi-a---- 512.00m                                             
  outbox                               38755249-4bb3-4841-bf5b-05f4a521514d -wi-a---- 128.00m                                             
  ids                                  601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------ 128.00m                                             
  inbox                                601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------ 128.00m                                             
  leases                               601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------   2.00g                                             
  master                               601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------   1.00g                                             
  metadata                             601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------ 512.00m                                             
  outbox                               601160c1-70aa-4ca4-9f76-71b08cb3c4ae -wi------ 128.00m                                             
  f82a0d58-0791-4137-b1e6-22a8794acd2a 7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a----   2.00g                                             
  ids                                  7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-ao--- 128.00m                                             
  inbox                                7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a---- 128.00m                                             
  leases                               7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a----   2.00g                                             
  master                               7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a----   1.00g                                             
  metadata                             7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a---- 512.00m                                             
  outbox                               7414f930-bbdb-4ec6-8132-4640cbb3c722 -wi-a---- 128.00m                                             
  f82a0d58-0791-4137-b1e6-22a8794acd2a 81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a----   2.00g                                             
  ids                                  81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-ao--- 128.00m                                             
  inbox                                81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a---- 128.00m                                             
  leases                               81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a----   2.00g                                             
  master                               81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a----   1.00g                                             
  metadata                             81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a---- 512.00m                                             
  outbox                               81ef11d0-4c0c-47b4-8953-d61a6af442d8 -wi-a---- 128.00m                                             
  lv_root                              vg0                                  -wi-ao--- 457.71g                                             
  lv_swap                              vg0                                  -wi-ao---   7.85g                                             
[root@cougar02 ~]# vdsClient -s 0 getStorageDomainInfo 38755249-4bb3-4841-bf5b-05f4a521514d
	uuid = 38755249-4bb3-4841-bf5b-05f4a521514d
	vguuid = 4Ceo5k-vMud-sVKL-blqX-JwKo-ETN6-8ao4L9
	lver = 10
	state = OK
	version = 3
	role = Master
	pool = ['7fd33b43-a9f4-4eb7-a885-e9583a929ceb']
	spm_id = 1
	type = ISCSI
	class = Data
	master_ver = 3725
	name = Dafna-32-03


engine=# SELECT image_guid,storage_name,storage_id from images_storage_domain_view where image_guid='f4567cb0-86de-4b9a-ad8c-20750ecc5299';
              image_guid              | storage_name |              storage_id              
--------------------------------------+--------------+--------------------------------------
 f4567cb0-86de-4b9a-ad8c-20750ecc5299 | Dafna-32-03  | 38755249-4bb3-4841-bf5b-05f4a521514d


	
2013-Jun-05, 13:45  User admin@internal have failed to move disk NEW_Disk1 to domain Dafna-32-03.
2013-Jun-05, 13:44  VM NEW powered off by admin@internal (Host: cougar02).
2013-Jun-05, 13:41  User admin@internal moving disk NEW_Disk1 to domain Dafna-32-03.
2013-Jun-05, 13:41  Snapshot 'Auto-generated for Live Storage Migration' creation for VM 'NEW' has been completed.

Comment 1 Ayal Baron 2013-07-10 09:18:04 UTC
Discussed with Haim; this is the correct behaviour (by design).
Deleting the disk should not block other operations on the VM, as the disk is no longer relevant.

