Created attachment 812070 [details]
Logs: rhevm, vdsm, libvirt, thread dump, superVdsm (iSCSI)

Description of problem:
Failed to power on a VM after disconnections of the Storage Domain.

Version-Release number of selected component (if applicable):
RHEVM 3.3 - IS18

Environment:
Host OS: RHEL 6.5
RHEVM: rhevm-3.3.0-0.25.beta1.el6ev.noarch
PythonSDK: rhevm-sdk-python-3.3.0.15-1.el6ev.noarch
VDSM: vdsm-4.13.0-0.2.beta1.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-27.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.412.el6.x86_64
SANLOCK: sanlock-2.8-1.el6.x86_64

How reproducible:
Unknown

Steps to Reproduce:
1. Create an iSCSI Data Center with two hosts connected to multiple Storage Domains (SD).
2. Create a VM from a template with an OS installed on it and run it on the HSM host.
3. Live storage migrate (LSM) the VM disk and block connectivity (via iptables) to all domains from the HSM host.
   * HSM - non-operational
   * VM - in paused state
4. When the VM pauses, remove the iptables block from the HSM host.
   * HSM - up
   * VM - up and running; the OS is running and there is no problem connecting to it.
5. Power off the VM.
6. Try to power on the VM (see the SDK sketch below).

Actual results:
Powering on the VM fails.

Expected results:
Powering on the VM succeeds.

Impact on user:
Failed to power on the VM.

Workaround:
Restart ovirt-engine.

Additional info:
/var/log/ovirt-engine/engine.log

2013-10-14 14:59:51,305 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (ajp-/127.0.0.1:8702-2) [3a68d9d7] CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED,$DiskName vm001_Disk1

/var/log/vdsm/vdsm.log
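For reference, steps 5-6 can be scripted against the Python SDK listed in the environment (rhevm-sdk-python). This is a minimal sketch, assuming the engine URL, credentials, and the VM name vm001 taken from the log excerpt above; all of these are placeholders, not values from this setup.

    # Minimal sketch using the RHEV/oVirt Python SDK 3.x (rhevm-sdk-python).
    # URL, credentials and the VM name are assumptions; adjust to the real setup.
    import time

    from ovirtsdk.api import API
    from ovirtsdk.infrastructure.errors import RequestError

    api = API(url='https://rhevm.example.com/api',
              username='admin@internal',
              password='password',
              insecure=True)  # skip certificate verification in a lab setup

    # Step 5: power off the VM and wait until it reports 'down'.
    api.vms.get(name='vm001').stop()
    while api.vms.get(name='vm001').get_status().get_state() != 'down':
        time.sleep(5)

    # Step 6: try to power the VM on again. With this bug the engine rejects
    # the request with ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED (see engine.log).
    try:
        api.vms.get(name='vm001').start()
    except RequestError as err:
        print('RunVm failed: %s' % err)

    api.disconnect()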
This bug is related to BZ#1018867.

The VM is locked because the Engine assumes that the VM's disks are being migrated, but the migration has actually already failed.

This bug can be closed after BZ#1018867 is resolved.
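One way to confirm that the stale migration lock is the blocker is to inspect the disk states through the same SDK. A hedged sketch, assuming the VM name from the report; the 'locked' state shown in the comment is the presumed symptom, not confirmed from these logs.

    # Hedged sketch (RHEV/oVirt Python SDK 3.x): list the VM's disks and their
    # states. Per the comment above, a disk presumably stays 'locked' even
    # though the live storage migration has already failed, blocking RunVm.
    from ovirtsdk.api import API

    api = API(url='https://rhevm.example.com/api',
              username='admin@internal',
              password='password',
              insecure=True)

    for disk in api.vms.get(name='vm001').disks.list():
        print('%s: %s' % (disk.get_name(), disk.get_status().get_state()))

    api.disconnect()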
(In reply to Sergey Gotliv from comment #1)
> This bug is related to BZ#1018867.
>
> The VM is locked because the Engine assumes that the VM's disks are being
> migrated, but the migration has actually already failed.
>
> This bug can be closed after BZ#1018867 is resolved.

Bug 1018867 is targeted for RHEV 3.4, while this bug is currently targeted for 3.3. Also, this bug references a patch that has already been merged upstream, so please explain whether there is a separate solution for this bug or whether it needs to be pushed out to 3.4.
Sergey, please clarify so we can verify it.
Followed the reproduction steps. Powering on the VM a second time after it was in the paused state works fine.

Verified with IS25:
vdsm-4.13.0-0.10.beta1.el6ev.x86_64
rhevm-3.3.0-0.37.beta1.el6ev.noarch
Closing - RHEV 3.3 Released