+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1488526 +++
======================================================================

Description of problem:
The stop_pinned_to_host_vms parameter does not work when a VM is pinned to two specific hosts and its migration mode is set to not allow migration.

Version-Release number of selected component (if applicable):
ansible-2.3.1.0-3.el7.noarch
ovirt-ansible-roles-1.0.1-1.el7ev.noarch

Environment:
vars:
  stop_pinned_to_host_vms: true
- two hosts
- a VM pinned to both hosts, with migration mode set to not allow any migration

Actual results:
The role does not stop the VM, and the upgrade of that host fails.

Expected results:
The role should stop the VM whenever it is pinned, or its migration mode allows only manual migration (and stop_pinned_to_host_vms: true).

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_aCFv8D/ansible_module_ovirt_hosts.py", line 390, in main
    fail_condition=failed_state,
  File "/tmp/ansible_aCFv8D/ansible_modlib.zip/ansible/module_utils/ovirt.py", line 723, in action
    getattr(entity_service, action)(**kwargs)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 30263, in upgrade
    return self._internal_action(action, 'upgrade', None, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 290, in _internal_action
    return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 53, in wait
    return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 287, in callback
    self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 125, in _check_fault
    self._raise_error(response, body.fault)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 109, in _raise_error
    raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Cannot switch the following Hosts to Maintenance mode: a-host-01.
One or more running VMs are indicated as non-migratable. The non-migratable VMs are: vm-01.]". HTTP response code is 409.

fatal: [**FILTERED**]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "address": null,
            "cluster": null,
            "comment": null,
            "fetch_nested": false,
            "force": false,
            "hosted_engine": null,
            "kdump_integration": null,
            "kernel_params": null,
            "name": "a-host-01",
            "nested_attributes": [],
            "override_display": null,
            "override_iptables": null,
            "password": null,
            "poll_interval": 3,
            "public_key": false,
            "spm_priority": null,
            "state": "upgraded",
            "timeout": 1200,
            "wait": true
        }
    },
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot switch the following Hosts to Maintenance mode: a-host-01.\nOne or more running VMs are indicated as non-migratable. The non-migratable VMs are: vm-01.]\". HTTP response code is 409."
}

(Originally by Petr Kubica)
A new parameter called stop_non_migratable_vms has been introduced. The old stop_pinned_to_host_vms is kept as an alias for the stop_non_migratable_vms variable. (Originally by Ondra Machacek)
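With the renamed variable, the role can be invoked like this (a minimal sketch; only the variable names stop_non_migratable_vms and stop_pinned_to_host_vms come from this report, while the play target, role name, and cluster value are illustrative assumptions):

```yaml
# Hypothetical playbook snippet. Only the two variable names below are
# taken from this bug report; everything else is an assumed example.
- hosts: localhost
  vars:
    cluster_name: mycluster          # illustrative value
    # New name: stop any VM that cannot be migrated off the host,
    # whether it is pinned or its migration mode disallows migration.
    stop_non_migratable_vms: true
    # The old name still works as an alias:
    # stop_pinned_to_host_vms: true
  roles:
    - oVirt.cluster-upgrade          # assumed role name
```

The alias means existing playbooks that set stop_pinned_to_host_vms keep working unchanged, while the new name reflects that the option covers all non-migratable VMs, not only pinned ones.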
Please move this to oVirt (Originally by ylavi)
Verified in:
ansible-2.3.2.0-2.el7.noarch
ovirt-ansible-roles-1.0.4-1.el7ev.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3137