Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1632257

Summary: [ansible-playbook cluster-upgrade] Hosts upgrade resets the scheduling policy values
Product: [oVirt] ovirt-ansible-collection
Reporter: Nicolas Ecarnot <nicolas>
Component: cluster-upgrade
Assignee: Ondra Machacek <omachace>
Status: CLOSED CURRENTRELEASE
QA Contact: Lukas Svaty <lsvaty>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 1.0.0
CC: bugs, mperina, nicolas, omachace
Target Milestone: ovirt-4.2.8
Flags: rule-engine: ovirt-4.2+
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version: ovirt-ansible-cluster-upgrade-1.1.10
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-01-22 10:23:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Infra
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Nicolas Ecarnot 2018-09-24 13:29:08 UTC
Description of problem:
After a successful run of Ansible's cluster-upgrade role, the scheduling policy values are reset to their defaults.
Using the vm_evenly_distributed policy, the values of HighVmCount/SpmVmGrace/MigrationThreshold were 4/1/2.
After the upgrade, they are back to 10/5/5.
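For reference, these values can be set explicitly through the ovirt_cluster Ansible module. A minimal sketch, assuming placeholder engine URL, credentials, and cluster name:

```yaml
# Sketch only: restore the vm_evenly_distributed tuning described above.
# The engine URL, credentials, and cluster name are placeholders.
- name: Restore custom scheduling policy values
  ovirt_cluster:
    auth:
      url: https://engine.example.com/ovirt-engine/api
      username: admin@internal
      password: "{{ engine_password }}"
    name: mycluster
    scheduling_policy: vm_evenly_distributed
    scheduling_policy_properties:
      - name: HighVmCount
        value: 4
      - name: SpmVmGrace
        value: 1
      - name: MigrationThreshold
        value: 2
```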

Version-Release number of selected component (if applicable):
- oVirt 4.2.6.4-1.el7
- ovirt-ansible-cluster-upgrade-1.1.7-1.el7.centos.noarch

How reproducible:
Always

Steps to Reproduce:
1. Check the present scheduler settings
2. Run ansible-playbook /root/cluster_upgrade.yml and wait until success
3. Check the new scheduler settings
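The checks in steps 1 and 3 can be scripted so the before/after values are easy to compare. A hedged sketch using the ovirt_cluster_facts module (auth details and cluster name are placeholders):

```yaml
# Sketch: dump the cluster's current scheduling policy for comparison
# before and after the upgrade. Auth and cluster name are placeholders.
- name: Fetch cluster facts
  ovirt_cluster_facts:
    auth: "{{ ovirt_auth }}"
    pattern: name=mycluster

- name: Show the active scheduling policy reference
  debug:
    var: ovirt_clusters[0].scheduling_policy
```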

Actual results:
The scheduler settings have changed.

Expected results:
These values should not change.

Comment 1 Nicolas Ecarnot 2018-10-17 07:34:54 UTC
Hi Ondra,

It looks like the issue is in tasks/cluster_policy.yml, where the previous policy name is remembered, but not the values.

I'd like to help, but I'm not skilled enough with Ansible.

Can I test or help in some ways?

Comment 3 Nicolas Ecarnot 2018-11-04 22:13:52 UTC
Hi Ondra,

Thank you for your PR.
I've just upgraded to 4.2.7, then installed your new RPM, then tried it.

At the end of the run, when the Ansible role tries to set the parameters back, it fails with the following message:

TASK [oVirt.cluster-upgrade : Set original cluster policy] **********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ovirt_clusters) module: custom_scheduling_policy_properties Supported parameters include: auth, ballooning, comment, compatibility_version, cpu_arch, cpu_type, data_center, description, external_network_providers, fence_connectivity_threshold, fence_enabled, fence_skip_if_connectivity_broken, fence_skip_if_sd_active, fetch_nested, gluster, ha_reservation, host_reason, ksm, ksm_numa, mac_pool, memory_policy, migration_auto_converge, migration_bandwidth, migration_bandwidth_limit, migration_compressed, migration_policy, name, nested_attributes, network, poll_interval, resilience_policy, rng_sources, scheduling_policy, scheduling_policy_properties, serial_policy, serial_policy_value, spice_proxy, state, switch_type, threads_as_cores, timeout, trusted_service, virt, vm_reason, wait"}
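Per the error above, the installed ovirt_clusters module version does not accept custom_scheduling_policy_properties, while its supported-parameter list does include scheduling_policy_properties. The restoring task would presumably need that form, along these lines (a sketch only; the variable names are made up):

```yaml
# Sketch (not the actual role code): restore task using the parameter name
# the module's error message lists as supported. Variables are placeholders.
- name: Set original cluster policy
  ovirt_clusters:
    auth: "{{ ovirt_auth }}"
    name: "{{ cluster_name }}"
    scheduling_policy: "{{ original_policy_name }}"
    scheduling_policy_properties: "{{ original_policy_properties }}"
```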

Comment 4 Ondra Machacek 2018-11-05 10:29:52 UTC
Right, I wonder how it could have worked for me. I've fixed it, so if you can re-verify, that would be very cool:

https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-cluster-upgrade_standard-check-pr/43/artifact/build-artifacts.el7.x86_64/ovirt-ansible-cluster-upgrade-1.1.9-0.1.master.20181105095815.el7.noarch.rpm

Comment 5 Nicolas Ecarnot 2018-11-09 08:18:31 UTC
(In reply to Ondra Machacek from comment #4)
> Right, I wonder how it could work for me, I've fixed it, so if you can
> re-verify, would be very cool:
> 
> https://jenkins.ovirt.org/job/oVirt_ovirt-ansible-cluster-upgrade_standard-
> check-pr/43/artifact/build-artifacts.el7.x86_64/ovirt-ansible-cluster-
> upgrade-1.1.9-0.1.master.20181105095815.el7.noarch.rpm

Hello Ondra,

Thank you for this patch.
I've just tested it, and I can confirm the values are correctly set back; but to be honest, at this point there are no hosts needing any updates.
I'm pretty sure that won't change the behavior of your Ansible role, and it will run the same way once hosts have actually been updated, but I wanted to be clear.

Before closing this bug, would you prefer to wait for further host updates, in which case I'll run a final test?

Comment 6 Nicolas Ecarnot 2018-11-12 10:32:08 UTC
Hello Ondra,

This morning my hosts needed upgrades, so I ran your Ansible role.
The cluster scheduling values were correctly restored.

So you can close this bug.

Thank you for your help.

Have a nice day.

-- 
Nico

Comment 7 Lukas Svaty 2018-11-12 11:31:13 UTC
Thanks, Nicolas, for your contribution!

Comment 8 Sandro Bonazzola 2019-01-22 10:23:14 UTC
This bugzilla is included in the oVirt 4.2.8 release, published on January 22nd 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.2.8 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.