Description of problem:
If the role marks one host as failed, it nevertheless moves on to the next host and starts migrating that host's VMs to the remaining hosts. At that moment two hosts are "down" instead of one (and the second host can time out because of the number of migrations, after which the role moves on to upgrading yet another host).

Version-Release number of selected component (if applicable):
ovirt-ansible-cluster-upgrade-1.1.3-1.el7ev.noarch
ansible-2.4.1.0-1.el7ae.noarch
Failed QA.

Steps:
1) Have a host with older repositories.
2) Run "check for upgrade" manually from the engine UI.
3) Remove the repositories/subscriptions on the alphabetically first host.
4) Run the Ansible cluster-upgrade role (see the playbook sketch below) - installation on that host fails, but Ansible continues upgrading the next host.

Version:
ovirt-ansible-cluster-upgrade-1.1.5-1.el7ev.noarch
ansible-2.5.0-0.3.rc1.el7ae.noarch
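For step 4, the role is typically invoked from a short playbook along the lines of the sketch below. This is illustrative only: the engine URL, credentials and cluster name are placeholders, and the variable names assume the ovirt.cluster-upgrade role's documented interface rather than anything stated in this report.

---
# Hypothetical reproduction playbook, not taken from this bug report.
# Engine URL, credentials and cluster name are placeholders.
- name: Upgrade all hosts in the cluster
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"   # assumed vaulted secret
    engine_cafile: /etc/pki/ovirt-engine/ca.pem
    cluster_name: Default
    check_upgrade: true          # re-run the upgrade check before upgrading each host
    reboot_after_upgrade: true

  roles:
    - ovirt.cluster-upgrade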
Verified with: ovirt-ansible-cluster-upgrade-1.1.6-1.el7ev.noarch
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.