Bug 989966 - VM's disk remains locked after LSM failed
Status: CLOSED DUPLICATE of bug 970889
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Nobody's working on this, feel free to take it
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: Triaged
Duplicates: 989955
Depends On:
Blocks:
 
Reported: 2013-07-30 04:29 EDT by Aharon Canan
Modified: 2016-02-10 13:45 EST
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-30 10:44:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (1.60 MB, application/x-gzip)
2013-07-30 04:29 EDT, Aharon Canan
Description Aharon Canan 2013-07-30 04:29:51 EDT
Created attachment 780452
logs

Description of problem:
After running an LSM that failed, the VM's disk remains locked and the VM cannot be started.

Version-Release number of selected component (if applicable):
sf19

How reproducible:

Steps to Reproduce:
1. Set up 2 hosts with 2 storage domains and a running VM with a thin-provisioned disk that has already been migrated a few times.
2. Start an LSM of the VM's disk (see the sketch after this list).
3. Power off the VM.
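
For step 2, a minimal sketch of how the disk move could be driven from a script, assuming the oVirt/RHEV 3.x REST API. The engine URL, credentials and UUIDs are placeholders and do not come from the attached logs; the exact move endpoint should be checked against the API of your 3.x minor version.

import requests

ENGINE = "https://engine.example.com/api"          # placeholder engine URL (assumption)
AUTH = ("admin@internal", "password")              # placeholder credentials (assumption)
DISK_ID = "00000000-0000-0000-0000-000000000000"   # placeholder disk UUID
TARGET_SD = "11111111-1111-1111-1111-111111111111" # placeholder destination storage domain UUID

# The move action takes an XML <action> body naming the target storage domain;
# issued while the VM is running, it triggers a live storage migration.
body = '<action><storage_domain id="%s"/></action>' % TARGET_SD

resp = requests.post(
    "%s/disks/%s/move" % (ENGINE, DISK_ID),
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab engine with a self-signed certificate
)
print(resp.status_code, resp.text)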

Actual results:
The LSM fails and the disk remains locked.
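
One way to confirm the stuck lock, sketched under the same assumptions as above (placeholder engine URL, credentials and disk UUID; the exact status element name should be verified against your API version), is to read the disk back over REST:

import requests

ENGINE = "https://engine.example.com/api"          # placeholder engine URL (assumption)
AUTH = ("admin@internal", "password")              # placeholder credentials (assumption)
DISK_ID = "00000000-0000-0000-0000-000000000000"   # placeholder disk UUID

resp = requests.get(
    "%s/disks/%s" % (ENGINE, DISK_ID),
    headers={"Accept": "application/xml"},
    auth=AUTH,
    verify=False,
)
# With this bug, the returned disk status stays "locked" after the failed LSM
# instead of returning to "ok".
print(resp.text)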

Expected results:
The VM's disk should return to an active (unlocked) state after the LSM fails, so the VM can be started again.

Additional info:
Comment 2 Sergey Gotliv 2013-07-30 10:59:19 EDT
*** Bug 989955 has been marked as a duplicate of this bug. ***
Comment 3 Aharon Canan 2013-07-30 11:07:08 EDT
I faced this issue on 3.2.2 as well.
I think we should fix it in 3.2.z too.

I will add a "?" flag on bug 970889 for 3.2.z.
