Bug 1303994

Summary: Host maintenance failed since VM is already running on destination host
Product: [oVirt] ovirt-engine
Reporter: Israel Pinto <ipinto>
Component: BLL.Virt
Assignee: Martin Betak <mbetak>
Status: CLOSED WONTFIX
QA Contact: Israel Pinto <ipinto>
Severity: low
Docs Contact:
Priority: low
Version: 3.6.0.3
CC: bugs, ipinto, mbetak, michal.skrivanek, mpoledni, ncredi, tjelinek
Target Milestone: ---
Keywords: Automation
Target Release: ---
Flags: sbonazzo: ovirt-4.1-
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-12-07 16:42:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: engine_log, vdsm_log_1, vdsm_log_2, test_log, server_log, host_output (flags: none for all)

Description Israel Pinto 2016-02-02 16:09:00 UTC
Description of problem:
Host maintenance failed because the VM is already running on the destination host.
In automation we have 2 test cases that run one after the other, about 2 seconds apart.
The setup has 2 hosts and one running VM.
The first case migrates the VM.
The second case puts a host into maintenance.
NOTE:
In the second case, the migration destination host is the source host of case one.

Version-Release number of selected component (if applicable):
RHEVM Version: 3.6.3-0.1.el6 
Host: 
RHEV Hypervisor - 7.2 - 20160126.0.el7ev
vdsm-4.17.18-0.el7ev
libvirt-1.2.17-13.el7_2.2

How reproducible:
Every time, in the automation test

Steps to Reproduce:
A 2-host environment with one VM running on host_1 (a minimal SDK sketch of the two steps follows below).
1. Migrate the VM from host_1 to host_2 (test case 1)
2. Switch host_2 to maintenance (test case 2)
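
The same sequence can be sketched outside the ART framework with the oVirt Python SDK. This is only an illustration of the timing, not the original test: it assumes ovirtsdk4 (a 4.x SDK, newer than the 3.6 setup in this bug), and the engine URL, credentials, VM name and host names are placeholders.

# Minimal reproduction sketch. Assumptions: ovirtsdk4 against a 4.x engine;
# URL, credentials, VM name and host names are placeholders, not values
# taken from this bug.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
try:
    system = connection.system_service()
    vms_service = system.vms_service()
    hosts_service = system.hosts_service()

    vm = vms_service.list(search='name=golden_env_mixed_virtio_1_0')[0]
    vm_service = vms_service.vm_service(vm.id)
    host_2 = hosts_service.list(search='name=host_2')[0]

    # Test case 1: migrate the VM to host_2 and wait until the engine
    # reports it Up on host_2 (the test log waits for status 'up').
    vm_service.migrate(host=types.Host(id=host_2.id))
    while True:
        current = vm_service.get()
        on_host_2 = current.host is not None and current.host.id == host_2.id
        if current.status == types.VmStatus.UP and on_host_2:
            break
        time.sleep(1)

    # About 2 seconds later, test case 2: put host_2 (the VM's new host)
    # into maintenance. The engine then migrates the VM back toward the
    # original host, whose VDSM may still hold the stale VM object from
    # case 1 and reject migrationCreate with "Virtual machine already
    # exists".
    time.sleep(2)
    hosts_service.host_service(host_2.id).deactivate()
finally:
    connection.close()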


Actual results:
Maintenance failed since the VM is still running on the destination host.
NOTE:
This has been happening since the last version; it did not happen before.


Expected results:
Maintenance should succeed.


Additional info:
Test log: 
15:27:26 ================================================================================
15:27:26 2016-02-02 15:27:26,777 - MainThread - test_runner - INFO - Starting TestMigrationVirtSanityCase_REST
15:27:26 2016-02-02 15:27:26,779 - MainThread - test_runner - DEBUG - Skipping duplicate log-messages...
15:27:26 2016-02-02 15:27:26,796 - MainThread - plmanagement.unittest_loader - INFO - setUp: TestMigrationVirtSanityCase_REST.test_migration
15:27:26 2016-02-02 15:27:26,796 - MainThread - plmanagement.unittest_loader - INFO - Test Name: TestMigrationVirtSanityCase_REST.test_migration
15:27:26 2016-02-02 15:27:26,797 - MainThread - plmanagement.unittest_loader - INFO - Test description: Check migration of one VM
15:27:26 2016-02-02 15:27:26,797 - MainThread - plmanagement.unittest_loader - INFO - Iteration Number: 2
15:27:27 2016-02-02 15:27:27,126 - MainThread - vms - INFO - Using Correlation-Id: vms_syncAction_a6f934d2-7a15-4237
15:27:49 2016-02-02 15:27:49,037 - MainThread - vms - INFO - vm status is 'up'
15:27:49 2016-02-02 15:27:49,038 - MainThread - art.rhevm_api.tests_lib.low_level.vms - INFO - Getting the golden_env_mixed_virtio_1_0 host after VM migrated.
15:27:49 2016-02-02 15:27:49,380 - MainThread - plmanagement.unittest_loader - INFO - tearDown: TestMigrationVirtSanityCase_REST.test_migration
15:27:49 2016-02-02 15:27:49,562 - MainThread - plmanagement.unittest_loader - INFO - Status: Pass
15:27:49 2016-02-02 15:27:49,567 - MainThread - test_runner - INFO - 
15:27:49 ================================================================================
15:27:49 2016-02-02 15:27:49,646 - MainThread - plmanagement.unittest_loader - INFO - setUp: TestMigrationVirtSanityCase_REST.test_migration_maintenance
15:27:49 2016-02-02 15:27:49,646 - MainThread - plmanagement.unittest_loader - INFO - Test Name: TestMigrationVirtSanityCase_REST.test_migration_maintenance
15:27:49 2016-02-02 15:27:49,647 - MainThread - plmanagement.unittest_loader - INFO - Test description: Check migration for one VM
15:27:49         by putting host into maintenance
15:27:49 2016-02-02 15:27:49,647 - MainThread - plmanagement.unittest_loader - INFO - Iteration Number: 3
15:27:49 2016-02-02 15:27:49,648 - MainThread - Virt_Migration_Cases - INFO - Check that migration for one VM by putting the host into maintenance succeed
15:27:50 2016-02-02 15:27:49,988 - MainThread - art.rhevm_api.tests_lib.high_level.vms - INFO - Setting host_mixed_1 into maintenance
15:27:50 2016-02-02 15:27:50,113 - MainThread - hosts - INFO - Using Correlation-Id: hosts_syncAction_4ef9d75f-cbfd-47cf
15:28:40 2016-02-02 15:28:40,320 - tcpdump-10.35.117.42 - plmanagement.mac_to_ip_conv - INFO - Caught 10.35.70.10 for 00:1A:4A:16:20:4A
15:29:01 2016-02-02 15:29:01,846 - tcpdump-10.35.117.42 - plmanagement.mac_to_ip_conv - INFO - Caught 10.35.117.10 for A0:36:9F:3B:19:70
15:30:06 2016-02-02 15:30:06,052 - tcpdump-10.35.117.42 - plmanagement.mac_to_ip_conv - INFO - Caught 10.35.72.134 for 00:1A:4A:16:20:46
15:30:20 2016-02-02 15:30:20,400 - tcpdump-10.35.117.42 - plmanagement.mac_to_ip_conv - INFO - Caught 10.35.70.23 for 00:1A:4A:16:20:45
15:31:06 2016-02-02 15:31:06,162 - MainThread - hosts - ERROR - Interrupt because of timeout. host status is 'preparing_for_maintenance'.
15:31:06 2016-02-02 15:31:06,163 - MainThread - art.rhevm_api.tests_lib.high_level.vms - ERROR - Failed to set host_mixed_1 into maintenance
15:31:06 2016-02-02 15:31:06,164 - MainThread - plmanagement.unittest_loader - INFO - tearDown: TestMigrationVirtSanityCase_REST.test_migration_maintenance
15:31:06 2016-02-02 15:31:06,353 - MainThread - plmanagement.error_fetcher - ERROR - Errors fetched from VDC(jenkins-vm-08.scl.lab.tlv.redhat.com): 2016-02-02 15:27:52,221 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command failed: Not SPM
15:31:06 2016-02-02 15:27:52,221 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand' return value '
15:31:06 2016-02-02 15:27:52,222 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Command 'HSMGetAllTasksStatusesVDSCommand(HostName = host_mixed_1, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e'})' execution failed: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM
15:31:06 2016-02-02 15:27:52,245 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] START, ConnectStoragePoolVDSCommand(HostName = host_mixed_2, ConnectStoragePoolVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', vdsId='8ab30771-5530-42cd-b95b-961d0d59542a', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a', masterVersion='1'}), log id: 7175d26e
15:31:06 2016-02-02 15:28:11,218 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-2) [24384375] Rerun VM '3900d08f-1be4-45c1-9660-2ddb0430d92b'. Called from VDS 'host_mixed_1'
15:31:06 2016-02-02 15:28:11,223 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-6-thread-20) [24384375] START, MigrateStatusVDSCommand(HostName = host_mixed_1, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}), log id: 68fd059c
15:31:06 2016-02-02 15:28:11,692 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [24384375] Correlation ID: 6de707d8, Job ID: b4fa6474-18e6-4f67-9e4a-b32ea816c5cd, Call Stack: null, Custom Event ID: -1, Message: Migration failed  while Host is in 'preparing for maintenance' state.
15:31:06   Consider manual intervention: stopping/migrating Vms as Host's state will not
15:31:06   turn to maintenance while VMs are still running on it.(VM: golden_env_mixed_virtio_1_0, Source: host_mixed_1, Destination: <UNKNOWN>).
15:31:06 2016-02-02 15:28:38,877 INFO  [org.ovirt.engine.core.bll.scheduling.policyunits.PowerSavingBalancePolicyUnit] (DefaultQuartzScheduler_Worker-18) [1fcd7e0] Automatic power management is disabled for cluster 'golden_env_mixed_1'.
15:31:06 
15:31:06 2016-02-02 15:31:06,434 - MainThread - test_runner - ERROR - Test Case failed: AssertionError: Maintenance test with one VM failed
15:31:06 Traceback (most recent call last):
15:31:06   File "/var/lib/jenkins/workspace/3.6-GE-compute/automation/ART/art/test_handler/plmanagement/plugins/unittest_test_runner_plugin.py", line 209, in __call__
15:31:06     self.f()
15:31:06   File "/var/lib/jenkins/workspace/3.6-GE-compute/automation/ART/art/tests/rhevmtests/virt/migration/migration_sanity_test.py", line 88, in test_migration_maintenance
15:31:06     "Maintenance test with one VM failed"
15:31:06   File "/usr/lib64/python2.6/unittest.py", line 324, in failUnless
15:31:06     if not expr: raise self.failureException, msg
15:31:06 AssertionError: Maintenance test with one VM failed
15:31:06 
15:31:06 2016-02-02 15:31:06,436 - MainThread - plmanagement.unittest_loader - ERROR - Status: Fail
15:31:06 2016-02-02 15:31:06,448 - MainThread - plmanagement.unittest_loader - INFO - TEST GROUP tearDown: TestMigrationVirtSanityCase_REST
15:31:06 2016-02-02 15:31:06,450 - MainThread - plmanagement.unittest_loader - INFO - 
----------------------------------------------------------------------------
vdsm_log source:

jsonrpc.Executor/3::ERROR::2016-02-02 13:27:52,114::task::866::Storage.TaskManager.Task::(_setError) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2210, in getAllTasksStatuses
    allTasksStatus = sp.getAllTasksStatuses()
  File "/usr/share/vdsm/storage/securable.py", line 75, in wrapper
    raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,114::task::885::Storage.TaskManager.Task::(_run) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::Task._run: 17415a09-99b3-4d57-acd9-bfac66fbc4e4 () {} failed - stopping task
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,114::task::1246::Storage.TaskManager.Task::(stop) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::stopping in state preparing (force False)
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,115::task::993::Storage.TaskManager.Task::(_decref) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::ref 1 aborting True
jsonrpc.Executor/3::INFO::2016-02-02 13:27:52,115::task::1171::Storage.TaskManager.Task::(prepare) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::aborting: Task is aborted: u'Secured object is not in safe state' - code 100
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,115::task::1176::Storage.TaskManager.Task::(prepare) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::Prepare: aborted: Secured object is not in safe state
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,115::task::993::Storage.TaskManager.Task::(_decref) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::ref 0 aborting True
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,115::task::928::Storage.TaskManager.Task::(_doAbort) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::Task._doAbort: force False
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,115::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,116::task::595::Storage.TaskManager.Task::(_updateState) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::moving from state preparing -> state aborting
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,116::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::_aborting: recover policy none
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,116::task::595::Storage.TaskManager.Task::(_updateState) Task=`17415a09-99b3-4d57-acd9-bfac66fbc4e4`::moving from state aborting -> state failed
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,116::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,116::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
jsonrpc.Executor/3::ERROR::2016-02-02 13:27:52,116::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object is not in safe state
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 104, in wrapper
    return m(self, *a, **kw)
  File "/usr/share/vdsm/storage/task.py", line 1179, in prepare
    raise self.error
SecureError: Secured object is not in safe state
jsonrpc.Executor/4::DEBUG::2016-02-02 13:27:52,206::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.migrate' in bridge with {u'params': {u'tunneled': u'false', u'dstqemu': u'10.35.117.44', u'autoConverge': u'false', u'src': u'10.35.117.42', u'dst': u'10.35.117.44:54321', u'vmId': u'3900d08f-1be4-45c1-9660-2ddb0430d92b', u'abortOnError': u'true', u'compressed': u'false', u'method': u'online'}, u'vmID': u'3900d08f-1be4-45c1-9660-2ddb0430d92b'}
jsonrpc.Executor/4::DEBUG::2016-02-02 13:27:52,207::API::546::vds::(migrate) {u'tunneled': u'false', u'dstqemu': u'10.35.117.44', u'autoConverge': u'false', u'src': u'10.35.117.42', u'dst': u'10.35.117.44:54321', u'vmId': u'3900d08f-1be4-45c1-9660-2ddb0430d92b', u'abortOnError': u'true', u'compressed': u'false', u'method': u'online'}
jsonrpc.Executor/4::DEBUG::2016-02-02 13:27:52,209::__init__::206::jsonrpc.Notification::(emit) Sending event {"params": {"3900d08f-1be4-45c1-9660-2ddb0430d92b": {"status": "Migration Source"}, "notify_time": 4728517620}, "jsonrpc": "2.0", "method": "|virt|VM_status|3900d08f-1be4-45c1-9660-2ddb0430d92b"}
jsonrpc.Executor/4::DEBUG::2016-02-02 13:27:52,209::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'VM.migrate' in bridge with True
Thread-6560::DEBUG::2016-02-02 13:27:52,221::migration::147::virt.vm::(_setupVdsConnection) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Initiating connection with destination
Thread-6560::DEBUG::2016-02-02 13:27:52,222::stompreactor::389::jsonrpc.AsyncoreClient::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2016-02-02 13:27:52,223::stomp::477::yajsonrpc.protocols.stomp.AsyncClient::(_process_connected) Stomp connection established
Thread-6560::DEBUG::2016-02-02 13:27:52,227::migration::159::virt.vm::(_setupVdsConnection) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Destination server is: 10.35.117.44:54321
Thread-6560::DEBUG::2016-02-02 13:27:52,229::migration::202::virt.vm::(_prepareGuest) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Migration started
Thread-6560::DEBUG::2016-02-02 13:27:52,247::migration::286::virt.vm::(run) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::migration semaphore acquired after 0 seconds
Thread-6560::DEBUG::2016-02-02 13:27:52,266::stompreactor::389::jsonrpc.AsyncoreClient::(send) Sending response
Thread-6560::INFO::2016-02-02 13:27:52,276::migration::335::virt.vm::(_startUnderlyingMigration) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Creation of destination VM took: 0 seconds
Thread-6560::ERROR::2016-02-02 13:27:52,276::migration::208::virt.vm::(_recover) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::migration destination error: Virtual machine already exists
Thread-6560::DEBUG::2016-02-02 13:27:52,276::stompreactor::389::jsonrpc.AsyncoreClient::(send) Sending response
----------------------------------------------------------------------------
vdsm_log dest:

**** Case 1: VM migration completed successfully:
Thread-46528::INFO::2016-02-02 13:27:46,654::migration::367::virt.vm::(_startUnderlyingMigration) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::migration took 19 seconds to complete
Thread-46529::DEBUG::2016-02-02 13:27:46,657::migration::450::virt.vm::(run) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::migration downtime thread exiting
Thread-46528::INFO::2016-02-02 13:27:46,657::vm::1324::virt.vm::(setDownStatus) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Changed state to Down: Migration succeeded (code=4)

**** Case 2: maintenance
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,279::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.migrationCreate' in bridge with [u'3900d08f-1be4-45c1-9660-2ddb0430d92b', {u'guestFQDN': u'localhost.localdomain', u'acpiEnable': u'true', u'emulatedMachine': u'pc-i440fx-rhel7.2.0', u'afterMigrationStatus': u'', u'spiceSecureChannels': u'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', u'vmId': u'3900d08f-1be4-45c1-9660-2ddb0430d92b', u'memGuaranteedSize': 1024, u'transparentHugePages': u'true', u'displayPort': u'5900', u'displaySecurePort': u'5901', u'spiceSslCipherSuite': u'DEFAULT', u'cpuType': u'SandyBridge', u'smp': u'1', u'migrationDest': u'libvirt', u'guestNumaNodes': [{u'nodeIndex': 0, u'cpus': u'0', u'memory': u'1024'}], u'smartcardEnable': u'false', u'numaTune': {u'nodeset': u'0', u'mode': u'interleave'}, u'custom': {u'device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965f': u"VmDevice:{id='VmDeviceId:{deviceId='964491f9-c423-4306-b4c5-9f629157965f', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}", u'device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965fdevice_4dab3dfe-c09d-4a25-bbbf-f8fb46eba66b': u"VmDevice:{id='VmDeviceId:{deviceId='4dab3dfe-c09d-4a25-bbbf-f8fb46eba66b', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}", u'device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965fdevice_4dab3dfe-c09d-4a25-bbbf-f8fb46eba66bdevice_8753e4e5-5c3e-4a27-98a6-a7d64b5e5737': u"VmDevice:{id='VmDeviceId:{deviceId='8753e4e5-5c3e-4a27-98a6-a7d64b5e5737', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}", u'device_b1d570ca-b104-42e5-acab-9f5a68bcba60': u"VmDevice:{id='VmDeviceId:{deviceId='b1d570ca-b104-42e5-acab-9f5a68bcba60', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}"}, u'vmType': u'kvm', u'_srcDomXML': u'<domain type=\'kvm\' id=\'21\'>\n  <name>golden_env_mixed_virtio_1_0</name>\n  <uuid>3900d08f-1be4-45c1-9660-2ddb0430d92b</uuid>\n  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">\n    <ovirt:qos/>\n  </metadata>\n  <maxMemory slots=\'16\' unit=\'KiB\'>4294967296</maxMemory>\n  <memory unit=\'KiB\'>1048576</memory>\n  <currentMemory unit=\'KiB\'>1048576</currentMemory>\n  <vcpu placement=\'static\' current=\'1\'>16</vcpu>\n  <cputune>\n    <shares>1020</shares>\n  </cputune>\n  <numatune>\n    
<memory mode=\'interleave\' nodeset=\'0\'/>\n  </numatune>\n  <resource>\n    <partition>/machine</partition>\n  </resource>\n  <sysinfo type=\'smbios\'>\n    <system>\n      <entry name=\'manufacturer\'>Red Hat</entry>\n      <entry name=\'product\'>RHEV Hypervisor</entry>\n      <entry name=\'version\'>7.2-20160126.0.el7ev</entry>\n      <entry name=\'serial\'>4C4C4544-0054-3010-8056-C3C04F563232</entry>\n      <entry name=\'uuid\'>3900d08f-1be4-45c1-9660-2ddb0430d92b</entry>\n    </system>\n  </sysinfo>\n  <os>\n    <type arch=\'x86_64\' machine=\'pc-i440fx-rhel7.2.0\'>hvm</type>\n    <smbios mode=\'sysinfo\'/>\n  </os>\n  <features>\n    <acpi/>\n  </features>\n  <cpu mode=\'custom\' match=\'exact\'>\n    <model fallback=\'allow\'>SandyBridge</model>\n    <topology sockets=\'16\' cores=\'1\' threads=\'1\'/>\n    <numa>\n      <cell id=\'0\' cpus=\'0\' memory=\'1048576\' unit=\'KiB\'/>\n    </numa>\n  </cpu>\n  <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n    <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n    <timer name=\'pit\' tickpolicy=\'delay\'/>\n    <timer name=\'hpet\' present=\'no\'/>\n  </clock>\n  <on_poweroff>destroy</on_poweroff>\n  <on_reboot>restart</on_reboot>\n  <on_crash>destroy</on_crash>\n  <devices>\n    <emulator>/usr/libexec/qemu-kvm</emulator>\n    <disk type=\'file\' device=\'cdrom\'>\n      <driver name=\'qemu\' type=\'raw\'/>\n      <source startupPolicy=\'optional\'/>\n      <backingStore/>\n      <target dev=\'hdc\' bus=\'ide\'/>\n      <readonly/>\n      <serial></serial>\n      <alias name=\'ide0-1-0\'/>\n      <address type=\'drive\' controller=\'0\' bus=\'1\' target=\'0\' unit=\'0\'/>\n    </disk>\n    <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n      <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\'/>\n      <source file=\'/rhev/data-center/934e017a-72a5-4da1-ac1e-4db1100ba80a/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218\'>\n        <seclabel model=\'selinux\' labelskip=\'yes\'/>\n      </source>\n      <backingStore/>\n      <target dev=\'vda\' bus=\'virtio\'/>\n      <serial>50f46df5-810a-47c7-98ad-66961a826ba0</serial>\n      <boot order=\'1\'/>\n      <alias name=\'virtio-disk0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x07\' function=\'0x0\'/>\n    </disk>\n    <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n      <alias name=\'scsi0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x05\' function=\'0x0\'/>\n    </controller>\n    <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n      <alias name=\'virtio-serial0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x06\' function=\'0x0\'/>\n    </controller>\n    <controller type=\'ide\' index=\'0\'>\n      <alias name=\'ide\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x1\'/>\n    </controller>\n    <controller type=\'usb\' index=\'0\'>\n      <alias name=\'usb\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x2\'/>\n    </controller>\n    <controller type=\'pci\' index=\'0\' model=\'pci-root\'>\n      <alias name=\'pci.0\'/>\n    </controller>\n    <interface type=\'bridge\'>\n      <mac address=\'00:1a:4a:16:88:71\'/>\n      <source bridge=\'ovirtmgmt\'/>\n      <target dev=\'vnet0\'/>\n      <model type=\'virtio\'/>\n      <filterref filter=\'vdsm-no-mac-spoofing\'/>\n      <link 
state=\'up\'/>\n      <alias name=\'net0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\'/>\n    </interface>\n    <channel type=\'unix\'>\n      <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/3900d08f-1be4-45c1-9660-2ddb0430d92b.com.redhat.rhevm.vdsm\'/>\n      <target type=\'virtio\' name=\'com.redhat.rhevm.vdsm\' state=\'connected\'/>\n      <alias name=\'channel0\'/>\n      <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n    </channel>\n    <channel type=\'unix\'>\n      <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/3900d08f-1be4-45c1-9660-2ddb0430d92b.org.qemu.guest_agent.0\'/>\n      <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'disconnected\'/>\n      <alias name=\'channel1\'/>\n      <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n    </channel>\n    <channel type=\'spicevmc\'>\n      <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n      <alias name=\'channel2\'/>\n      <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n    </channel>\n    <input type=\'mouse\' bus=\'ps2\'/>\n    <input type=\'keyboard\' bus=\'ps2\'/>\n    <graphics type=\'spice\' port=\'5900\' tlsPort=\'5901\' autoport=\'yes\' listen=\'10.35.117.42\' passwdValidTo=\'1970-01-01T00:00:01\'>\n      <listen type=\'network\' address=\'10.35.117.42\' network=\'vdsm-ovirtmgmt\'/>\n    </graphics>\n    <sound model=\'ich6\'>\n      <alias name=\'sound0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x04\' function=\'0x0\'/>\n    </sound>\n    <video>\n      <model type=\'qxl\' ram=\'65536\' vram=\'8192\' vgamem=\'16384\' heads=\'1\'/>\n      <alias name=\'video0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\'/>\n    </video>\n    <memballoon model=\'virtio\'>\n      <alias name=\'balloon0\'/>\n      <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x08\' function=\'0x0\'/>\n    </memballoon>\n  </devices>\n  <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n    <label>system_u:system_r:svirt_t:s0:c445,c684</label>\n    <imagelabel>system_u:object_r:svirt_image_t:s0:c445,c684</imagelabel>\n  </seclabel>\n</domain>\n', u'memSize': 1024, u'smpCoresPerSocket': u'1', u'vmName': u'golden_env_mixed_virtio_1_0', u'nice': u'0', u'username': u'None', u'maxMemSize': 4194304, u'bootMenuEnable': u'false', u'copyPasteEnable': u'true', u'displayIp': u'10.35.117.42', u'keyboardLayout': u'en-us', u'smpThreadsPerCore': u'1', u'guestDiskMapping': {u'50f46df5-810a-47c7-9': {u'name': u'/dev/vda'}, u'QEMU_DVD-ROM_': {u'name': u'/dev/sr0'}}, u'clientIp': u'', u'fileTransferEnable': u'true', u'nicModel': u'rtl8139,pv', u'elapsedTimeOffset': 206.12603092193604, u'kvmEnable': u'true', u'pitReinjection': u'false', u'displayNetwork': u'ovirtmgmt', u'devices': [{u'alias': u'sound0', u'specParams': {}, u'deviceId': u'485e0d86-0333-4930-8a19-6c302e5cb162', u'address': {u'slot': u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'ich6', u'type': u'sound'}, {u'target': 1048576, u'alias': u'balloon0', u'specParams': {u'model': u'virtio'}, u'deviceId': u'd950b94f-adab-4dea-8810-1c3844272413', u'address': {u'slot': u'0x08', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'memballoon', u'type': u'balloon'}, {u'device': u'unix', u'alias': u'channel0', u'type': u'channel', u'address': 
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias': u'channel1', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'2'}}, {u'device': u'spicevmc', u'alias': u'channel2', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}}, {u'index': u'0', u'alias': u'scsi0', u'specParams': {}, u'deviceId': u'aca1ae79-c387-40b0-ba06-4bbaa49b637a', u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi', u'type': u'controller'}, {u'alias': u'virtio-serial0', u'specParams': {}, u'deviceId': u'700bbee9-f09b-4f91-a89b-c9e570c1916a', u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'virtio-serial', u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb', u'type': u'controller', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide', u'type': u'controller', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x1'}}, {u'alias': u'video0', u'specParams': {u'vram': u'8192', u'ram': u'65536', u'heads': u'1', u'vgamem': u'16384'}, u'deviceId': u'd24614cb-5644-4223-a434-f232a316eecb', u'address': {u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'qxl', u'type': u'video'}, {u'tlsPort': u'5901', u'specParams': {u'fileTransferEnable': u'true', u'copyPasteEnable': u'true', u'displayIp': u'10.35.117.42', u'displayNetwork': u'ovirtmgmt'}, u'deviceId': u'03b2133a-4905-4cac-90fc-16efeb5142f1', u'device': u'spice', u'type': u'graphics', u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr': u'00:1a:4a:16:88:71', u'linkActive': True, u'network': u'ovirtmgmt', u'alias': u'net0', u'filter': u'vdsm-no-mac-spoofing', u'specParams': {u'inbound': {}, u'outbound': {}}, u'deviceId': u'55ccf2f3-234d-46c1-94f1-a1ecd10cb6de', u'address': {u'slot': u'0x03', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name': u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias': u'ide0-1-0', u'specParams': {u'path': u''}, u'readonly': u'True', u'deviceId': u'ab839c09-30ce-428f-b9fb-fe08cbaf7ed0', u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared': u'false', u'path': u'', u'type': u'disk'}, {u'poolID': u'934e017a-72a5-4da1-ac1e-4db1100ba80a', u'reqsize': u'0', u'index': 0, u'iface': u'virtio', u'apparentsize': u'2353659904', u'alias': u'virtio-disk0', u'imageID': u'50f46df5-810a-47c7-98ad-66961a826ba0', u'readonly': u'False', u'shared': u'false', u'truesize': u'2362871808', u'type': u'disk', u'domainID': u'42f9467c-8fc5-452e-a2df-fe146da8f4eb', u'volumeInfo': {u'domainID': u'42f9467c-8fc5-452e-a2df-fe146da8f4eb', u'volType': u'path', u'leaseOffset': 0, u'volumeID': u'd2dc0f19-af96-49cf-bfb3-9d1292985218', u'leasePath': u'/rhev/data-center/mnt/10.35.118.113:_nas01_ge__1__nfs__0/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218.lease', u'imageID': u'50f46df5-810a-47c7-98ad-66961a826ba0', u'path': 
u'/rhev/data-center/mnt/10.35.118.113:_nas01_ge__1__nfs__0/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218'}, u'format': u'cow', u'deviceId': u'50f46df5-810a-47c7-98ad-66961a826ba0', u'address': {u'slot': u'0x07', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path': u'/rhev/data-center/934e017a-72a5-4da1-ac1e-4db1100ba80a/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218', u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda', u'bootOrder': u'1', u'volumeID': u'd2dc0f19-af96-49cf-bfb3-9d1292985218', u'specParams': {}, u'volumeChain': [{u'domainID': u'42f9467c-8fc5-452e-a2df-fe146da8f4eb', u'volType': u'path', u'leaseOffset': 0, u'volumeID': u'd2dc0f19-af96-49cf-bfb3-9d1292985218', u'leasePath': u'/rhev/data-center/mnt/10.35.118.113:_nas01_ge__1__nfs__0/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218.lease', u'imageID': u'50f46df5-810a-47c7-98ad-66961a826ba0', u'path': u'/rhev/data-center/mnt/10.35.118.113:_nas01_ge__1__nfs__0/42f9467c-8fc5-452e-a2df-fe146da8f4eb/images/50f46df5-810a-47c7-98ad-66961a826ba0/d2dc0f19-af96-49cf-bfb3-9d1292985218'}]}], u'display': u'qxl', u'status': u'Up', u'timeOffset': u'0', u'maxVCpus': u'16', u'guestIPs': u'10.35.118.2', u'statusTime': u'4728517640', u'maxMemSlots': 16}]
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,282::API::581::vds::(migrationCreate) Migration create
jsonrpc.Executor/3::WARNING::2016-02-02 13:27:52,282::API::189::vds::(create) vm 3900d08f-1be4-45c1-9660-2ddb0430d92b already exists
jsonrpc.Executor/3::DEBUG::2016-02-02 13:27:52,283::API::586::vds::(migrationCreate) Migration create - Failed
jsonrpc.Executor/1::DEBUG::2016-02-02 13:27:52,286::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.destroy' in bridge with [u'3900d08f-1be4-45c1-9660-2ddb0430d92b']
jsonrpc.Executor/1::INFO::2016-02-02 13:27:52,287::API::341::vds::(destroy) vmContainerLock acquired by vm 3900d08f-1be4-45c1-9660-2ddb0430d92b
jsonrpc.Executor/1::DEBUG::2016-02-02 13:27:52,287::vm::3861::virt.vm::(destroy) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::destroy Called
jsonrpc.Executor/1::INFO::2016-02-02 13:27:52,288::vm::3793::virt.vm::(releaseVm) vmId=`3900d08f-1be4-45c1-9660-2ddb0430d92b`::Release VM resources
 
----------------------------------------------------------------------------
Engine log: 
action hosts_syncAction_4ef9d75f-cbfd-47cf
2016-02-02 15:27:50,232 INFO  [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[934e017a-72a5-4da1-ac1e-4db1100ba80a=<POOL, ACTION_TYPE_FAILED_OBJECT_LOCKED>]'}'
2016-02-02 15:27:50,242 INFO  [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] Running command: MaintenanceNumberOfVdssCommand internal: false. Entities affected :  ID: 38967c4a-d2ca-4743-8532-428756f9c13e Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2016-02-02 15:27:50,245 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] START, SetVdsStatusVDSCommand(HostName = host_mixed_1, SetVdsStatusVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', status='PreparingForMaintenance', nonOperationalReason='NONE', stopSpmFailureLogged='true', maintenanceReason='null'}), log id: 25427c
2016-02-02 15:27:50,245 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] VDS 'host_mixed_1' is spm and moved from up calling resetIrs.
2016-02-02 15:27:50,246 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{runAsync='true', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a', ignoreFailoverLimit='false', vdsId='38967c4a-d2ca-4743-8532-428756f9c13e', ignoreStopFailed='false'}), log id: 5ef7cf2b
2016-02-02 15:27:50,249 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] START, SpmStopVDSCommand(HostName = host_mixed_1, SpmStopVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a'}), log id: 15539e5f
2016-02-02 15:27:51,256 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] SpmStopVDSCommand::Stopping SPM on vds 'host_mixed_1', pool id '934e017a-72a5-4da1-ac1e-4db1100ba80a'
2016-02-02 15:27:52,079 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] FINISH, SpmStopVDSCommand, log id: 15539e5f
2016-02-02 15:27:52,085 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] FINISH, ResetIrsVDSCommand, log id: 5ef7cf2b
2016-02-02 15:27:52,089 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] FINISH, SetVdsStatusVDSCommand, log id: 25427c
2016-02-02 15:27:52,092 INFO  [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[934e017a-72a5-4da1-ac1e-4db1100ba80a=<POOL, ACTION_TYPE_FAILED_OBJECT_LOCKED>]'}'
2016-02-02 15:27:52,102 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: 934e017a-72a5-4da1-ac1e-4db1100ba80a Type: StoragePool
2016-02-02 15:27:52,104 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '9c5b3d71-cbb4-45e8-80bf-f7c80246ee6c' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,105 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain 'fa0c3a31-74af-443d-9330-4acdf4f52145' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,105 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '2d8c69aa-5814-4458-bc77-66bc9957ee3b' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,106 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '42f9467c-8fc5-452e-a2df-fe146da8f4eb' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,109 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '0f683dac-2c80-47e2-932c-bb085ab5d593' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,111 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '34b79c8e-e0bc-47b5-80ad-72f0ba764677' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,112 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '51338d66-1926-4084-b7ca-ab4f296eb69a' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,113 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain '12b7440e-b056-4d49-837f-a587de96af92' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,113 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain 'f1148ec1-272d-4e88-9a70-c80384443d6a' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,114 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Storage Pool '934e017a-72a5-4da1-ac1e-4db1100ba80a' - Updating Storage Domain 'a3079d40-73e8-4113-90f9-1e96a1e22186' status from 'Active' to 'Unknown', reason: null
2016-02-02 15:27:52,118 INFO  [org.ovirt.engine.core.bll.MaintenanceVdsCommand] (ajp-/127.0.0.1:8702-3) [hosts_syncAction_4ef9d75f-cbfd-47cf] Running command: MaintenanceVdsCommand internal: true. Entities affected :  ID: 38967c4a-d2ca-4743-8532-428756f9c13e Type: VDS
2016-02-02 15:27:52,122 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Correlation ID: fe3ffad, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center golden_env_mixed. Setting Data Center status to Non Responsive (On host Unavailable, Error: done).
2016-02-02 15:27:52,141 INFO  [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3900d08f-1be4-45c1-9660-2ddb0430d92b=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName golden_env_mixed_virtio_1_0>]', sharedLocks='null'}'
2016-02-02 15:27:52,175 INFO  [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] Running command: InternalMigrateVmCommand internal: true. Entities affected :  ID: 3900d08f-1be4-45c1-9660-2ddb0430d92b Type: VM
2016-02-02 15:27:52,215 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b', srcHost='10.35.117.42', dstVdsId='8ab30771-5530-42cd-b95b-961d0d59542a', dstHost='10.35.117.44:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='false', migrateCompressed='false', consoleAddress='null'}), log id: 42b940d8
2016-02-02 15:27:52,216 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] START, MigrateBrokerVDSCommand(HostName = host_mixed_1, MigrateVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b', srcHost='10.35.117.42', dstVdsId='8ab30771-5530-42cd-b95b-961d0d59542a', dstHost='10.35.117.44:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='false', migrateCompressed='false', consoleAddress='null'}), log id: 10504721
2016-02-02 15:27:52,221 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command failed: Not SPM
2016-02-02 15:27:52,221 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand' return value '
TaskStatusListReturnForXmlRpc:{status='StatusForXmlRpc [code=654, message=Not SPM]'}
'
2016-02-02 15:27:52,222 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] HostName = host_mixed_1
2016-02-02 15:27:52,222 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Command 'HSMGetAllTasksStatusesVDSCommand(HostName = host_mixed_1, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e'})' execution failed: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM
2016-02-02 15:27:52,245 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] START, ConnectStoragePoolVDSCommand(HostName = host_mixed_2, ConnectStoragePoolVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', vdsId='8ab30771-5530-42cd-b95b-961d0d59542a', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a', masterVersion='1'}), log id: 7175d26e
2016-02-02 15:27:54,248 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] FINISH, MigrateBrokerVDSCommand, log id: 10504721
2016-02-02 15:27:54,255 INFO  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (ajp-/127.0.0.1:8702-3) [6de707d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 42b940d8
2016-02-02 15:27:54,260 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-3) [6de707d8] Correlation ID: hosts_syncAction_4ef9d75f-cbfd-47cf, Job ID: b4fa6474-18e6-4f67-9e4a-b32ea816c5cd, Call Stack: null, Custom Event ID: -1, Message: Host host_mixed_1 was switched to Maintenance mode by admin@internal (Reason:  No reason was returned for this operation failure. See logs for further details.).
2016-02-02 15:27:55,163 WARN  [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-60) [17cf2ae6] skipping VM '3900d08f-1be4-45c1-9660-2ddb0430d92b' from this monitoring cycle - the VM data has changed since fetching the data
2016-02-02 15:27:55,163 WARN  [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-60) [17cf2ae6] VM '3900d08f-1be4-45c1-9660-2ddb0430d92b' not in changed list, skipping devices update.
2016-02-02 15:27:56,374 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] FINISH, ConnectStoragePoolVDSCommand, log id: 7175d26e
2016-02-02 15:27:56,486 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-81) [fe3ffad] hostFromVds::selectedVds - 'host_mixed_2', spmStatus 'Free', storage pool 'golden_env_mixed', storage pool version '3.6'
2016-02-02 15:27:56,490 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-81) [fe3ffad] starting spm on vds 'host_mixed_2', storage pool 'golden_env_mixed', prevId '-1', LVER '-1'
2016-02-02 15:27:56,491 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] START, SpmStartVDSCommand(HostName = host_mixed_2, SpmStartVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a', prevId='-1', prevLVER='-1', storagePoolFormatType='V3', recoveryMode='Manual', SCSIFencing='false'}), log id: 33404461
2016-02-02 15:27:57,511 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] spmStart polling started: taskId 'ed818153-75ce-4658-86df-cd2210f8fcf0'
2016-02-02 15:28:01,527 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] spmStart polling ended: taskId 'ed818153-75ce-4658-86df-cd2210f8fcf0' task status 'finished'
2016-02-02 15:28:01,551 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] spmStart polling ended, spm status: SPM
2016-02-02 15:28:01,552 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] START, HSMClearTaskVDSCommand(HostName = host_mixed_2, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', taskId='ed818153-75ce-4658-86df-cd2210f8fcf0'}), log id: 67352dff
2016-02-02 15:28:02,558 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] FINISH, HSMClearTaskVDSCommand, log id: 67352dff
2016-02-02 15:28:02,558 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-81) [fe3ffad] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@1d736bb6, log id: 33404461
2016-02-02 15:28:02,572 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Initialize Irs proxy from vds: 10.35.117.44
2016-02-02 15:28:02,580 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-81) [fe3ffad] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host host_mixed_2 (Address: 10.35.117.44).
2016-02-02 15:28:02,580 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.117.44
2016-02-02 15:28:02,601 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true', storagePoolId='934e017a-72a5-4da1-ac1e-4db1100ba80a', ignoreFailoverLimit='false'}), log id: 54f1560b
2016-02-02 15:28:03,771 INFO  [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-6-thread-24) [5bf9b5db] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected :  ID: f1148ec1-272d-4e88-9a70-c80384443d6a Type: Storage
2016-02-02 15:28:03,776 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-24) [5bf9b5db] START, GetVGInfoVDSCommand(HostName = host_mixed_2, GetVGInfoVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', VGID='zYh5bc-EGll-D1zf-yMHZ-jI1v-Zud8-Id16bc'}), log id: 6655851f
2016-02-02 15:28:03,779 INFO  [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-6-thread-4) [5f4e9642] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected :  ID: a3079d40-73e8-4113-90f9-1e96a1e22186 Type: Storage
2016-02-02 15:28:03,785 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-4) [5f4e9642] START, GetVGInfoVDSCommand(HostName = host_mixed_2, GetVGInfoVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', VGID='amBcUb-5QDH-bo7Y-BlrB-f3qJ-NQFA-NebxgV'}), log id: 4d0e9efd
2016-02-02 15:28:03,788 INFO  [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-6-thread-38) [608e4a02] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected :  ID: 9c5b3d71-cbb4-45e8-80bf-f7c80246ee6c Type: Storage
2016-02-02 15:28:03,793 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-38) [608e4a02] START, GetVGInfoVDSCommand(HostName = host_mixed_2, GetVGInfoVDSCommandParameters:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a', VGID='QLS35S-CkRp-TUvr-Far5-CLVV-xzUr-3Op35X'}), log id: 6f1ddfcf
2016-02-02 15:28:03,828 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] -- executeIrsBrokerCommand: Attempting on storage pool '934e017a-72a5-4da1-ac1e-4db1100ba80a'
2016-02-02 15:28:03,829 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] START, HSMGetAllTasksInfoVDSCommand(HostName = host_mixed_2, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='8ab30771-5530-42cd-b95b-961d0d59542a'}), log id: 75b2e75e
2016-02-02 15:28:04,426 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 75b2e75e
2016-02-02 15:28:04,427 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 54f1560b
2016-02-02 15:28:04,427 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-37) [fe3ffad] Discovered no tasks on Storage Pool 'golden_env_mixed'
2016-02-02 15:28:04,585 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-24) [5bf9b5db] FINISH, GetVGInfoVDSCommand, return: [LUNs:{id='3600a09803830347a625d467a79745233', physicalVolumeId='kSZ7Xe-n3RI-XWq2-yYQD-fK1X-uVCM-c1nOCB', volumeGroupId='zYh5bc-EGll-D1zf-yMHZ-jI1v-Zud8-Id16bc', serial='SNETAPP_LUN_C-Mode_804zb_FzytR3', lunMapping='1', vendorId='NETAPP', productId='LUN C-Mode', lunConnections='[StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='150', pvSize='0', vendorName='NETAPP', pathsDictionary='[sdc=true, sdi=true]', pathsCapacity='[sdc=150, sdi=150]', lunType='ISCSI', status='null', diskId='null', diskAlias='null', storageDomainId='f1148ec1-272d-4e88-9a70-c80384443d6a', storageDomainName='null'}], log id: 6655851f
2016-02-02 15:28:04,594 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-4) [5f4e9642] FINISH, GetVGInfoVDSCommand, return: [LUNs:{id='3600a09803830347a625d467a79745234', physicalVolumeId='bceJ9e-BEdB-DKuC-btIB-WfXH-0heD-pO1lo3', volumeGroupId='amBcUb-5QDH-bo7Y-BlrB-f3qJ-NQFA-NebxgV', serial='SNETAPP_LUN_C-Mode_804zb_FzytR4', lunMapping='2', vendorId='NETAPP', productId='LUN C-Mode', lunConnections='[StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='150', pvSize='0', vendorName='NETAPP', pathsDictionary='[sdd=true, sdj=true]', pathsCapacity='[sdd=150, sdj=150]', lunType='ISCSI', status='null', diskId='null', diskAlias='null', storageDomainId='a3079d40-73e8-4113-90f9-1e96a1e22186', storageDomainName='null'}], log id: 4d0e9efd
2016-02-02 15:28:06,025 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-38) [608e4a02] FINISH, GetVGInfoVDSCommand, return: [LUNs:{id='3600a09803830347a625d467a79745232', physicalVolumeId='7Cd1PA-yFIe-yCqD-uPQ0-r39K-SWh1-30FZWK', volumeGroupId='QLS35S-CkRp-TUvr-Far5-CLVV-xzUr-3Op35X', serial='SNETAPP_LUN_C-Mode_804zb_FzytR2', lunMapping='0', vendorId='NETAPP', productId='LUN C-Mode', lunConnections='[StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.118.118', iqn='iqn.1992-08.com.netapp:sn.9a14187dd2e911e4bbb900a09861387c:vs.6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='150', pvSize='0', vendorName='NETAPP', pathsDictionary='[sdb=true, sde=true]', pathsCapacity='[sdb=150, sde=150]', lunType='ISCSI', status='null', diskId='null', diskAlias='null', storageDomainId='9c5b3d71-cbb4-45e8-80bf-f7c80246ee6c', storageDomainName='null'}], log id: 6f1ddfcf
2016-02-02 15:28:11,177 INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-2) [24384375] VM '3900d08f-1be4-45c1-9660-2ddb0430d92b'(golden_env_mixed_virtio_1_0) moved from 'MigratingFrom' --> 'Up'
2016-02-02 15:28:11,177 INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-2) [24384375] Adding VM '3900d08f-1be4-45c1-9660-2ddb0430d92b' to re-run list
2016-02-02 15:28:11,185 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-2) [24384375] START, FullListVDSCommand(HostName = , FullListVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', vds='Host[,38967c4a-d2ca-4743-8532-428756f9c13e]', vmIds='[3900d08f-1be4-45c1-9660-2ddb0430d92b]'}), log id: 19929ea8
2016-02-02 15:28:11,193 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler_Worker-2) [24384375] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.2.0, afterMigrationStatus=, pid=77773, memGuaranteedSize=1024, transparentHugePages=true, displayPort=5900, displaySecurePort=5901, spiceSslCipherSuite=DEFAULT, cpuType=SandyBridge, smp=1, guestNumaNodes=[Ljava.lang.Object;@68ee34b5, smartcardEnable=false, numaTune={nodeset=0, mode=interleave}, custom={device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965f=VmDevice:{id='VmDeviceId:{deviceId='964491f9-c423-4306-b4c5-9f629157965f', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965fdevice_4dab3dfe-c09d-4a25-bbbf-f8fb46eba66b=VmDevice:{id='VmDeviceId:{deviceId='4dab3dfe-c09d-4a25-bbbf-f8fb46eba66b', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_b1d570ca-b104-42e5-acab-9f5a68bcba60device_964491f9-c423-4306-b4c5-9f629157965fdevice_4dab3dfe-c09d-4a25-bbbf-f8fb46eba66bdevice_8753e4e5-5c3e-4a27-98a6-a7d64b5e5737=VmDevice:{id='VmDeviceId:{deviceId='8753e4e5-5c3e-4a27-98a6-a7d64b5e5737', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_b1d570ca-b104-42e5-acab-9f5a68bcba60=VmDevice:{id='VmDeviceId:{deviceId='b1d570ca-b104-42e5-acab-9f5a68bcba60', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}}, vmType=kvm, memSize=1024, smpCoresPerSocket=1, vmName=golden_env_mixed_virtio_1_0, nice=0, status=Up, maxMemSize=4194304, bootMenuEnable=false, vmId=3900d08f-1be4-45c1-9660-2ddb0430d92b, copyPasteEnable=true, displayIp=10.35.117.42, smpThreadsPerCore=1, guestDiskMapping={50f46df5-810a-47c7-9={name=/dev/vda}, QEMU_DVD-ROM_={name=/dev/sr0}}, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard, fileTransferEnable=true, nicModel=rtl8139,pv, keyboardLayout=en-us, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@6ec68ce4, display=qxl, timeOffset=0, maxVCpus=16, clientIp=, statusTime=4728536590, maxMemSlots=16}], log id: 19929ea8
2016-02-02 15:28:11,196 INFO  [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-2) [24384375] Received a spice Device without an address when processing VM 3900d08f-1be4-45c1-9660-2ddb0430d92b devices, skipping device: {tlsPort=5901, specParams={fileTransferEnable=true, copyPasteEnable=true, displayIp=10.35.117.42, displayNetwork=ovirtmgmt}, deviceType=graphics, deviceId=03b2133a-4905-4cac-90fc-16efeb5142f1, device=spice, type=graphics, port=5900}
2016-02-02 15:28:11,218 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-2) [24384375] Rerun VM '3900d08f-1be4-45c1-9660-2ddb0430d92b'. Called from VDS 'host_mixed_1'
2016-02-02 15:28:11,223 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-6-thread-20) [24384375] START, MigrateStatusVDSCommand(HostName = host_mixed_1, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='38967c4a-d2ca-4743-8532-428756f9c13e', vmId='3900d08f-1be4-45c1-9660-2ddb0430d92b'}), log id: 68fd059c
2016-02-02 15:28:11,658 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (org.ovirt.thread.pool-6-thread-20) [24384375] FINISH, MigrateStatusVDSCommand, log id: 68fd059c
2016-02-02 15:28:11,668 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [24384375] Correlation ID: 6de707d8, Job ID: b4fa6474-18e6-4f67-9e4a-b32ea816c5cd, Call Stack: null, Custom Event ID: -1, Message: Failed to migrate VM golden_env_mixed_virtio_1_0 to Host host_mixed_2 . Trying to migrate to another Host.
2016-02-02 15:28:11,684 WARN  [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (org.ovirt.thread.pool-6-thread-20) [24384375] CanDoAction of action 'InternalMigrateVm' failed for user admin@internal. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2016-02-02 15:28:11,684 INFO  [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (org.ovirt.thread.pool-6-thread-20) [24384375] Lock freed to object 'EngineLock:{exclusiveLocks='[3900d08f-1be4-45c1-9660-2ddb0430d92b=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName golden_env_mixed_virtio_1_0>]', sharedLocks='null'}'
2016-02-02 15:28:11,692 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [24384375] Correlation ID: 6de707d8, Job ID: b4fa6474-18e6-4f67-9e4a-b32ea816c5cd, Call Stack: null, Custom Event ID: -1, Message: Migration failed  while Host is in 'preparing for maintenance' state.
  Consider manual intervention: stopping/migrating Vms as Host's state will not
  turn to maintenance while VMs are still running on it.(VM: golden_env_mixed_virtio_1_0, Source: host_mixed_1, Destination: <UNKNOWN>).
2016-02-02 15:28:38,877 INFO  [org.ovirt.engine.core.bll.scheduling.policyunits.PowerSavingBalancePolicyUnit] (DefaultQuartzScheduler_Worker-18) [1fcd7e0] Automatic power management is disabled for cluster 'golden_env_mixed_1'.
2016-02-02 15:29:38,890 INFO  [org.ovirt.engine.core.bll.scheduling.policyunits.PowerSavingBalancePolicyUnit] (DefaultQuartzScheduler_Worker-50) [31ce5ebb] Automatic power management is disabled for cluster 'golden_env_mixed_1'.
2016-02-02 15:30:38,906 INFO  [org.ovirt.engine.core.bll.scheduling.policyunits.PowerSavingBalancePolicyUnit] (DefaultQuartzScheduler_Worker-17) [605d0d2b] Automatic power management is disabled for cluster 'golden_env_mixed_1'.

Comment 1 Israel Pinto 2016-02-02 16:09:59 UTC
Created attachment 1120489 [details]
engine_log

Comment 2 Israel Pinto 2016-02-02 16:12:16 UTC
Created attachment 1120491 [details]
vdsm_log_1

Comment 3 Israel Pinto 2016-02-02 16:12:42 UTC
Created attachment 1120492 [details]
vdsm_log_2

Comment 4 Israel Pinto 2016-02-02 16:13:09 UTC
Created attachment 1120493 [details]
test_log

Comment 5 Martin Betak 2016-02-03 17:07:35 UTC
Hi, can you please also provide the server.log? It usually contains more detailed error messages.

Comment 6 Israel Pinto 2016-02-03 18:42:13 UTC
Created attachment 1120890 [details]
server_log

Comment 7 Martin Betak 2016-02-04 10:19:24 UTC
Israel,

in the server.log I see a lot of occurrences of BZ#1261877, which is supposed to be fixed in 3.6.0. Are you sure your version is 3.6.3?

Thank you

Martin

Comment 8 Israel Pinto 2016-02-04 10:22:55 UTC
Yes:
Version-Release number of selected component (if applicable):
RHEVM Version: 3.6.3-0.1.el6 
Host: 
RHEV Hypervisor - 7.2 - 20160126.0.el7ev
vdsm-4.17.18-0.el7ev
libvirt-1.2.17-13.el7_2.2

Comment 9 Martin Polednik 2016-02-04 14:35:46 UTC
Please provide the output of the following commands:

virsh -r nodedev-list
vdsClient -s 0 hostdevListByCaps
lspci

on both hosts.
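
For convenience, a minimal collection sketch (assumptions on my side: plain Python 3 and key-based root SSH access to both hosts; the host names are the ones that appear in the engine log):

# hypothetical helper to collect the requested diagnostics from both hosts
import subprocess

HOSTS = ["host_mixed_1", "host_mixed_2"]      # host names taken from the engine log
COMMANDS = [
    "virsh -r nodedev-list",
    "vdsClient -s 0 hostdevListByCaps",
    "lspci",
]

for host in HOSTS:
    with open("host_output_%s.txt" % host, "w") as out:
        for cmd in COMMANDS:
            out.write("==== %s ====\n" % cmd)
            # run the command remotely, keeping stderr inline with stdout
            result = subprocess.run(["ssh", "root@" + host, cmd],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT,
                                    universal_newlines=True)
            out.write(result.stdout)
            out.write("\n")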

Comment 10 Israel Pinto 2016-02-10 12:58:16 UTC
Created attachment 1122784 [details]
host_output

Comment 11 Martin Polednik 2016-02-22 13:11:35 UTC
Although the engine/server logs point to something similar to BZ#1306333, the logs provided from the host are fine. Either something changed between the logs, or the problematic devices disappeared.

Comment 12 Martin Betak 2016-02-22 14:09:32 UTC
Indeed, the host device issue (tracked in a different bug) seems unrelated to this case. This new "behavior" of the migration/maintenance interaction seems to be a consequence of the 3.6 changes to VM/Host monitoring. Investigating further ...

Comment 13 Martin Betak 2016-03-03 18:04:43 UTC
Israel: I tried to reproduce this migration/maintenance scenario by mass-migrating 10 VMs from the source host to the destination and putting the destination into maintenance mode immediately after.

This resulted in the following:

1) some VMs managed to migrate successfully
2) some migrations were cancelled
3) A message was output into logs saying: 
"Migration failed  while Host is in 'preparing for maintenance' state.
  Consider manual intervention: stopping/migrating Vms as Host's state will not
  turn to maintenance while VMs are still running on it."
4) Performed the suggested "manual intervention": the VMs that had managed to migrate were migrated back to the source (a sketch of this follows below)
5) Upon successful completion of 4), the *Destination* host went into maintenance mode automatically.
 
Maybe the only shortcoming here is that the engine did not try to re-issue the migrations from the preparing-for-maintenance destination, but based on the observed log messages and the overall system behaviour I would not say this behaviour is necessarily a bug.
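
As an illustration of that manual intervention, a minimal sketch, assuming the 3.6 REST API (v3 XML representation) is driven directly with Python's requests library; the engine URL and credentials are placeholders, and the host names from this bug are used as stand-ins:

# hypothetical sketch: migrate the VMs still reported on the preparing-for-maintenance
# host (host_mixed_2) back to the original source host (host_mixed_1)
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder engine URL
AUTH = ("admin@internal", "password")                    # placeholder credentials
XML = {"Content-Type": "application/xml"}

def get(path, **params):
    # certificate verification disabled for brevity
    r = requests.get(ENGINE + path, params=params, auth=AUTH, verify=False)
    return ET.fromstring(r.content)

source_host_id = get("/hosts", search="name=host_mixed_1").find("host").get("id")

for vm in get("/vms", search="host=host_mixed_2").findall("vm"):
    # ask the engine to migrate this VM back to the source host explicitly
    requests.post(ENGINE + "/vms/%s/migrate" % vm.get("id"),
                  data='<action><host id="%s"/></action>' % source_host_id,
                  headers=XML, auth=AUTH, verify=False)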

Comment 14 Israel Pinto 2016-03-03 19:10:12 UTC
Martin:
We have a case here where the VM is on Host_1 and we migrate it to Host_2.
After the migration is done, we put Host_2 into maintenance. Host_1 is now the destination, BUT the destroy of the VM probably did not finish yet on this host (Host_1), which is why we get the "already running on destination" message.
This case is rare, since it is only hit by automation testing.

But it can also happen in a case where we need to migrate one or more VMs in a loop between 2 hosts without delay.

We started seeing this failure after the fix for the destroy of the VM that stayed as external after it was deleted.
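
To make the timing explicit, a minimal reproduction sketch of the two back-to-back steps (same assumptions as above: 3.6 REST API, v3 XML representation, Python requests; engine URL and credentials are placeholders; the VM and host names are the ones from this bug):

# hypothetical sketch of the racy sequence: migrate the VM, then immediately put the
# new source host into maintenance, forcing a migration back to a host that may still
# be tearing down the previous instance of the same VM
import time
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder engine URL
AUTH = ("admin@internal", "password")                    # placeholder credentials
XML = {"Content-Type": "application/xml"}

def get(path, **params):
    return ET.fromstring(requests.get(ENGINE + path, params=params,
                                      auth=AUTH, verify=False).content)

vm_id = get("/vms", search="name=golden_env_mixed_virtio_1_0").find("vm").get("id")
host_2_id = get("/hosts", search="name=host_mixed_2").find("host").get("id")

# step 1: migrate the VM from host_mixed_1 to host_mixed_2
requests.post(ENGINE + "/vms/%s/migrate" % vm_id,
              data='<action><host id="%s"/></action>' % host_2_id,
              headers=XML, auth=AUTH, verify=False)

# wait only until the engine reports the VM as 'up' again (simplified: assumes the
# status briefly leaves 'up' while the migration is in progress) ...
while get("/vms/" + vm_id).find("status/state").text != "up":
    time.sleep(2)

# step 2: ... and immediately ask for maintenance of host_mixed_2; the engine now has
# to migrate the VM back to host_mixed_1, which may not have finished the destroy of
# the previous instance yet
requests.post(ENGINE + "/hosts/%s/deactivate" % host_2_id,
              data="<action/>", headers=XML, auth=AUTH, verify=False)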

Comment 15 Tomas Jelinek 2016-03-08 14:23:25 UTC
This is a very rare case and does not cause any harm (e.g. it does not affect the VM).
It is a consequence of the fact that after the migration finishes, the engine needs to send the destroy VM command to VDSM on the source host. This can race with a new migration command.

Comment 16 Red Hat Bugzilla Rules Engine 2016-03-08 14:23:31 UTC
This request has been proposed for two releases. This is invalid flag usage. The ovirt-future release flag has been cleared. If you wish to change the release flag, you must clear one release flag and then set the other release flag to ?.

Comment 17 Red Hat Bugzilla Rules Engine 2016-03-08 14:23:31 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 18 Michal Skrivanek 2016-11-23 12:57:42 UTC
The best option would be to work around that in the automation and try again; it is very unlikely to happen in the real world.
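
One possible shape for such an automation workaround, sketched under the same assumptions as the snippets above (3.6 REST API, v3 XML representation, Python requests; engine URL and credentials are placeholders): after requesting maintenance, poll the host, and if it stays in 'preparing for maintenance' because a migration lost the race, simply re-trigger migration of whatever is still reported on it.

# hypothetical retry loop around the maintenance request
import time
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder engine URL
AUTH = ("admin@internal", "password")                    # placeholder credentials
XML = {"Content-Type": "application/xml"}

def get(path, **params):
    return ET.fromstring(requests.get(ENGINE + path, params=params,
                                      auth=AUTH, verify=False).content)

host_id = get("/hosts", search="name=host_mixed_2").find("host").get("id")
requests.post(ENGINE + "/hosts/%s/deactivate" % host_id,
              data="<action/>", headers=XML, auth=AUTH, verify=False)

deadline = time.time() + 300
while time.time() < deadline:
    state = get("/hosts/" + host_id).find("status/state").text
    if state == "maintenance":
        break
    if state == "preparing_for_maintenance":
        # retry: ask the engine again to migrate whatever is still running here,
        # letting the scheduler pick the destination
        for vm in get("/vms", search="host=host_mixed_2").findall("vm"):
            requests.post(ENGINE + "/vms/%s/migrate" % vm.get("id"),
                          data="<action/>", headers=XML, auth=AUTH, verify=False)
    time.sleep(10)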