Bug 1018888 - Can’t run LSM to the same Storage Domain after disconnecting the Storage Domain
Summary: Can’t run LSM to the same Storage Domain after disconnecting the Storage Domain
Keywords:
Status: CLOSED DUPLICATE of bug 1018867
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.3.0
Assignee: Sergey Gotliv
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-10-14 15:38 UTC by vvyazmin@redhat.com
Modified: 2016-02-10 17:04 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-10-16 14:38:15 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
amureini: Triaged+


Attachments
## Logs rhevm, vdsm, libvirt, thread dump, superVdsm (iSCSI) (4.85 MB, application/x-gzip)
2013-10-14 15:38 UTC, vvyazmin@redhat.com

Description vvyazmin@redhat.com 2013-10-14 15:38:15 UTC
Created attachment 812092
## Logs rhevm, vdsm, libvirt, thread dump, superVdsm (iSCSI)

Description of problem:
Can’t run LSM (Live Storage Migration) to the same Storage Domain after the Storage Domain has been disconnected and reconnected.

Version-Release number of selected component (if applicable):
RHEVM 3.3 - IS18 environment:

Host OS: RHEL 6.5

RHEVM:  rhevm-3.3.0-0.25.beta1.el6ev.noarch
PythonSDK:  rhevm-sdk-python-3.3.0.15-1.el6ev.noarch
VDSM:  vdsm-4.13.0-0.2.beta1.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-27.el6.x86_64
QEMU & KVM:  qemu-kvm-rhev-0.12.1.2-2.412.el6.x86_64
SANLOCK:  sanlock-2.8-1.el6.x86_64

How reproducible:
Unknown

Steps to Reproduce:
1. Create an iSCSI Data Center with two hosts connected to multiple Storage Domains (SDs).
2. Create and run a VM from a template with an OS installed on it; run it on the HSM (non-SPM) host.
3. Start LSM of the VM disk, then block connectivity (via iptables) to all Storage Domains from the HSM host (see the example after these steps):
* HSM - non-operational
* VM - paused
4. When the VM pauses, remove the iptables block from the HSM host:
* HSM - up
* VM - up and running; the OS is running and can be connected to normally.
5. Power off the VM.
6. Restart ovirt-engine.
7. Power on the VM.
8. Run the LSM action again.
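
The exact iptables rule used in step 3 is not recorded in this report; a minimal sketch of one way to block and later restore the HSM host's iSCSI connectivity, assuming the default iSCSI port 3260 and a placeholder storage server address, is:

  # Step 3: block outgoing iSCSI traffic from the HSM host to the storage server
  iptables -A OUTPUT -p tcp -d <storage-server-ip> --dport 3260 -j DROP

  # Step 4: delete the same rule to restore connectivity
  iptables -D OUTPUT -p tcp -d <storage-server-ip> --dport 3260 -j DROP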

Actual results:
LSM fails.

Expected results:
LSM succeeds.

Impact on user:
LSM fails.

Workaround:
None.

Additional info:

/var/log/ovirt-engine/engine.log

2013-10-14 15:13:38,865 INFO  [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-22) Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2013-10-14 15:13:38,880 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-22) Failed in HSMGetAllTasksStatusesVDS method
2013-10-14 15:13:38,882 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-22) Error code CannotCreateLogicalVolume and error message VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume
2013-10-14 15:13:38,882 INFO  [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-22) SPMAsyncTask::PollTask: Polling task fb8ca5a5-5c99-4651-bb0d-04efc3830227 (Parent Command LiveMigrateDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2013-10-14 15:13:38,895 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-22) BaseAsyncTask::LogEndTaskFailure: Task fb8ca5a5-5c99-4651-bb0d-04efc3830227 (Parent Command LiveMigrateDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume
2013-10-14 15:13:38,895 INFO  [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-22) CommandAsyncTask::EndActionIfNecessary: All tasks of command 4fd97a4d-00fa-4a0a-84e8-c669e7adbf66 has ended -> executing EndAction
2013-10-14 15:13:38,895 INFO  [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-22) CommandAsyncTask::EndAction: Ending action for 1 tasks (command ID: 4fd97a4d-00fa-4a0a-84e8-c669e7adbf66): calling EndAction .
2013-10-14 15:13:38,896 INFO  [org.ovirt.engine.core.bll.CommandAsyncTask] (pool-5-thread-50) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to EndAction LiveMigrateDisk, executionIndex: 0
2013-10-14 15:13:39,059 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (pool-5-thread-50) [6207452e] Ending command with failure: org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand
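
The CannotCreateLogicalVolume error indicates the SPM could not create the target logical volume for the second LSM, consistent with a leftover LV from the interrupted first migration still occupying the image on the block domain. One way to check, assuming a block (iSCSI) domain whose VG name equals the storage domain UUID (placeholder below), is to list the LVs and their tags on the SPM host:

  # List the LVs in the storage domain's VG together with their tags
  # (VDSM tags block-domain LVs with the image UUID they belong to)
  lvs -o lv_name,lv_size,lv_tags <storage-domain-uuid>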


/var/log/vdsm/vdsm.log

Comment 1 Sergey Gotliv 2013-10-16 14:38:15 UTC

*** This bug has been marked as a duplicate of bug 1018867 ***

