Bug 1492486 - [downstream clone - 4.1.7] [ovirt-ansible-roles] ovirt-cluster-upgrade: stop_pinned_to_host_vms parameter doesn't work when the VM has two specific hosts set and migration mode set to 'Do not allow migration'
Summary: [downstream clone - 4.1.7] [ovirt-ansible-roles] ovirt-cluster-upgrade: stop_pinned_to_host_vms parameter doesn't work when the VM has two specific hosts set and migration mode set to 'Do not allow migration'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-ansible-roles
Version: 4.1.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.1.7
Target Release: ---
Assignee: Ondra Machacek
QA Contact: Petr Kubica
URL:
Whiteboard:
Depends On: 1488526
Blocks:
 
Reported: 2017-09-17 19:40 UTC by rhev-integ
Modified: 2017-11-07 17:27 UTC (History)
5 users

Fixed In Version: ovirt-ansible-roles-1.0.4-1.el7ev
Doc Type: No Doc Update
Doc Text:
undefined
Clone Of: 1488526
Environment:
Last Closed: 2017-11-07 17:27:31 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:3137 0 normal SHIPPED_LIVE 4.1.7 - ovirt-ansible-roles bug fix and enhancement update 2017-11-07 22:22:25 UTC

Description rhev-integ 2017-09-17 19:40:16 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1488526 +++
======================================================================

Description of problem:
The stop_pinned_to_host_vms parameter doesn't work when the VM has two specific hosts set and migration mode set to 'Do not allow migration'.

Version-Release number of selected component (if applicable):
ansible-2.3.1.0-3.el7.noarch
ovirt-ansible-roles-1.0.1-1.el7ev.noarch

Environment:
vars:
  stop_pinned_to_host_vms: true

- two hosts
- a VM pinned to both hosts, with migration mode set to 'Do not allow migration'
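For context, a minimal playbook applying the role with this variable might look as follows. This is only a sketch: the engine URL, credentials, and cluster name are placeholders, and the connection variable names are assumptions based on the role's documented defaults.

```yaml
---
- name: Upgrade cluster hosts, stopping pinned VMs first
  hosts: localhost
  gather_facts: false
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api   # placeholder
    engine_user: admin@internal                               # placeholder
    engine_password: "{{ vault_engine_password }}"            # placeholder
    cluster_name: production                                  # placeholder
    stop_pinned_to_host_vms: true
  roles:
    - ovirt-cluster-upgrade
```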

Actual results:
the role does not stop the VM
the upgrade of that host fails

Expected results:
the role should stop the VM whenever the VM is pinned to a host or its migration mode is 'Allow manual migration only' (and stop_pinned_to_host_vms: true)

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_aCFv8D/ansible_module_ovirt_hosts.py", line 390, in main
    fail_condition=failed_state,
  File "/tmp/ansible_aCFv8D/ansible_modlib.zip/ansible/module_utils/ovirt.py", line 723, in action
    getattr(entity_service, action)(**kwargs)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 30263, in upgrade
    return self._internal_action(action, 'upgrade', None, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 290, in _internal_action
    return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 53, in wait
    return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 287, in callback
    self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 125, in _check_fault
    self._raise_error(response, body.fault)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 109, in _raise_error
    raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Cannot switch the following Hosts to Maintenance mode: a-host-01.
One or more running VMs are indicated as non-migratable. The non-migratable VMs are: vm-01.]". HTTP response code is 409.

fatal: [**FILTERED**]: FAILED! => {
    "changed": false, 
    "failed": true, 
    "invocation": {
        "module_args": {
            "address": null, 
            "cluster": null, 
            "comment": null, 
            "fetch_nested": false, 
            "force": false, 
            "hosted_engine": null, 
            "kdump_integration": null, 
            "kernel_params": null, 
            "name": "a-host-01", 
            "nested_attributes": [], 
            "override_display": null, 
            "override_iptables": null, 
            "password": null, 
            "poll_interval": 3, 
            "public_key": false, 
            "spm_priority": null, 
            "state": "upgraded", 
            "timeout": 1200, 
            "wait": true
        }
    }, 
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot switch the following Hosts to Maintenance mode: a-host-01.\nOne or more running VMs are indicated as non-migratable. The non-migratable VMs are: vm-01.]\". HTTP response code is 409."
}

(Originally by Petr Kubica)

Comment 1 rhev-integ 2017-09-17 19:40:20 UTC
A new parameter called stop_non_migratable_vms has been introduced. The old stop_pinned_to_host_vms is kept as an alias for the stop_non_migratable_vms variable.

(Originally by Ondra Machacek)
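The decision the role has to make can be sketched as follows. This is a hypothetical illustration (not the role's actual code): a VM blocks maintenance mode whenever it cannot be live-migrated automatically, i.e. its placement-policy affinity is pinned or manual-migration-only, regardless of how many hosts it is pinned to.

```python
def should_stop_vm(affinity, stop_non_migratable_vms):
    """Decide whether the role should shut a VM down before host upgrade.

    affinity: the VM placement-policy affinity, one of
        "migratable", "user_migratable", "pinned"
    stop_non_migratable_vms: the role variable (stop_pinned_to_host_vms
        remains a backward-compatible alias for it).
    """
    if not stop_non_migratable_vms:
        return False
    # Only fully migratable VMs can stay running; any other affinity
    # makes the engine refuse maintenance mode with HTTP 409.
    return affinity in ("pinned", "user_migratable")
```

With this check, a VM pinned to two hosts with migration disabled is stopped just like a VM pinned to a single host, which is the case the original stop_pinned_to_host_vms handling missed.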

Comment 2 rhev-integ 2017-09-17 19:40:23 UTC
Please move this to oVirt

(Originally by ylavi)

Comment 4 Petr Kubica 2017-10-17 14:54:15 UTC
Verified in 
ansible-2.3.2.0-2.el7.noarch
ovirt-ansible-roles-1.0.4-1.el7ev.noarch

Comment 6 errata-xmlrpc 2017-11-07 17:27:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3137

