Description of problem:

When updating all hosts in a cluster in the RHV Administration Portal [1], the first option is "Stop Pinned VMs." The wizard lets you skip a host for the upgrade, but even if you do, VMs pinned to that host are still stopped. All pinned VMs are stopped, including those on skipped hosts.

Version-Release number of selected component (if applicable):
ovirt-engine-4.4.9.5-0.1.el8ev.noarch
ovirt-ansible-collection-1.6.5-1.el8ev.noarch

How reproducible:
Every time.

Steps to Reproduce:
1. Pin a VM on each of 3 hosts (a scripted equivalent is sketched after this comment).
2. In the Administration Portal, click Compute → Clusters and select the cluster.
3. Click Upgrade.
4. Select TWO of the THREE hosts to update, then click Next.
5. Configure the upgrade to "Stop Pinned VMs."
6. Configure any other options and click Next.
7. Review the summary of the hosts and virtual machines that will be affected.
8. Click Upgrade.

Actual results:
All pinned VMs are stopped, even on the host that was skipped.

Expected results:
Only pinned VMs on the hosts being updated are stopped.

Additional info:

References:
[1] Updating All Hosts in a Cluster
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/upgrade_guide/index#Updating_all_hosts_in_a_cluster_minor_updates
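For step 1, the pinning can also be done outside the UI. A minimal sketch using the ovirt.ovirt.ovirt_vm module, assuming an already-obtained ovirt_auth fact; the VM, cluster, and host names are placeholders, not taken from any real environment:

# Sketch only: pin an existing VM to one host so it cannot migrate.
# "vm1", "mycluster", and "host1" are hypothetical names.
- name: Pin vm1 to host1
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: vm1
    cluster: mycluster
    host: host1
    placement_policy: pinned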
I suspect this could be addressed in the upgrade.yml task file by:

1. Registering the results of the "Get list of VMs in host" task:
> https://github.com/oVirt/ovirt-ansible-collection/blob/98e2db3c4e42a00d0213a3dbfbb199befd03f623/roles/cluster_upgrade/tasks/upgrade.yml#L1

2. Incorporating that register when evaluating the conditional in the "Shutdown non-migratable VMs" task (a sketch follows this comment):
> https://github.com/oVirt/ovirt-ansible-collection/blob/98e2db3c4e42a00d0213a3dbfbb199befd03f623/roles/cluster_upgrade/tasks/upgrade.yml#L30

I think that task currently only uses the "vms_in_cluster.ovirt_vms" register, which was set in the "Get list of VMs in cluster" task in the main.yml task file:
> https://github.com/oVirt/ovirt-ansible-collection/blob/98e2db3c4e42a00d0213a3dbfbb199befd03f623/roles/cluster_upgrade/tasks/main.yml#L103

Having said all that, I do not currently have a way to test this, so I apologize if I have misread the role.
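A minimal sketch of that idea, keeping the task names above; the cluster_name/host_name variables, the search pattern, and the placement_policy filter are illustrative assumptions, not the role's actual code:

# Sketch only: register the per-host VM list, then shut down only pinned VMs
# that are running on the host currently being upgraded.
- name: Get list of VMs in host
  ovirt.ovirt.ovirt_vm_info:
    auth: "{{ ovirt_auth }}"
    pattern: "cluster={{ cluster_name }} and host={{ host_name }}"
  register: vms_in_host

- name: Shutdown non-migratable VMs
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.name }}"
    state: stopped
    force: true
  loop: "{{ vms_in_host.ovirt_vms }}"
  # Assumes the returned VM dicts expose placement_policy.affinity.
  when: item.placement_policy.affinity == 'pinned'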
*** Bug 2055901 has been marked as a duplicate of this bug. ***
Verification failed in ovirt-ansible-collection-2.0.0-0.6.BETA.el8ev.noarch.

Reproduction steps are the same as in the description. Only one host is upgraded, then the upgrade process fails. Pinned VMs remain shut down.

2022-05-05 10:44:14 UTC - TASK [ovirt.ovirt.cluster_upgrade : Create list of VM names which have been shut down] ***
2022-05-05 10:44:14 UTC - fatal: [localhost]: FAILED! => {"msg": "Unexpected templating type error occurred on ({{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='host_name') | list }}): must be str, not list"}
2022-05-05 10:44:14 UTC - {
  "status" : "OK",
  "msg" : "",
  "data" : {
    "uuid" : "816c615a-990d-4e65-a014-c43b57e6790c",
    "counter" : 231,
    "stdout" : "fatal: [localhost]: FAILED! => {\"msg\": \"Unexpected templating type error occurred on ({{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='host_name') | list }}): must be str, not list\"}",
    "start_line" : 205,
    "end_line" : 206,
    "runner_ident" : "71f16550-cc5e-11ec-9a23-001a4aa00900",
    "event" : "runner_on_failed",
    "pid" : 1515254,
    "created" : "2022-05-05T10:44:13.410069",
    "parent_uuid" : "001a4aa0-0900-9262-0f79-000000000510",
    "event_data" : {
      "playbook" : "ovirt-cluster-upgrade.yml",
      "playbook_uuid" : "46f2ec3d-1c95-4c42-8c18-71473c627eb4",
      "play" : "oVirt cluster upgrade wizard target",
      "play_uuid" : "001a4aa0-0900-9262-0f79-000000000008",
      "play_pattern" : "localhost",
      "task" : "Create list of VM names which have been shut down",
      "task_uuid" : "001a4aa0-0900-9262-0f79-000000000510",
      "task_action" : "set_fact",
      "task_args" : "",
      "task_path" : "/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/cluster_upgrade/tasks/upgrade.yml:60",
      "role" : "cluster_upgrade",
      "host" : "localhost",
      "remote_addr" : "127.0.0.1",
      "res" : {
        "msg" : "Unexpected templating type error occurred on ({{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='host_name') | list }}): must be str, not list",
        "_ansible_no_log" : false
      },
      "start" : "2022-05-05T10:44:13.375809",
      "end" : "2022-05-05T10:44:13.409798",
      "duration" : 0.033989,
      "ignore_errors" : null,
      "event_loop" : null,
      "uuid" : "816c615a-990d-4e65-a014-c43b57e6790c"
    }
  }
}
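"must be str, not list" is the error Jinja raises when + concatenates a string with a list, which suggests pinned_vms_names was a string at that point. A hedged sketch of one way to avoid that failure mode, keeping the expression from the traceback verbatim and only changing the initialization (this is not necessarily the role's actual fix):

# Sketch only: start pinned_vms_names as a list so the later concatenation
# is list + list rather than str + list.
- name: Initialize list of shut-down VM names
  ansible.builtin.set_fact:
    pinned_vms_names: []

- name: Create list of VM names which have been shut down
  ansible.builtin.set_fact:
    pinned_vms_names: "{{ pinned_vms_names + pinned_to_host_vms.results | selectattr('changed') | map(attribute='host_name') | list }}"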
(In reply to Barbora Dolezalova from comment #14)
> Verification failed in ovirt-ansible-collection-2.0.0-0.6.BETA.el8ev.noarch

Could you please retest with ovirt-ansible-collection-2.0.3-1.el8ev, which is the correct version shipped to customers in RHV 4.4 SP1?
(In reply to Martin Perina from comment #15)
> (In reply to Barbora Dolezalova from comment #14)
> > Verification failed in ovirt-ansible-collection-2.0.0-0.6.BETA.el8ev.noarch
>
> Could you please retest with ovirt-ansible-collection-2.0.3-1.el8ev, which is
> the correct version shipped to customers in RHV 4.4 SP1?

Retested with the same result.

# rpm -qa ovirt-ansible-collection
ovirt-ansible-collection-2.0.3-1.el8ev.noarch
OK, so in that case we need to retarget to 4.5.1; it's too late to fix this in 4.5.0-1.
Verified in ovirt-ansible-collection-2.2.1-1.el8ev.noarch. During the upgrade, only pinned VMs on the selected hosts were stopped.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV Engine and Host Common Packages update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:6394