Description of problem:

A VM was started with I/O threads disabled:

===
engine=# select vm_name,num_of_io_threads from vm_static where vm_guid in (select vm_guid from vm_static where vm_name='test_vm');
 vm_name | num_of_io_threads
---------+-------------------
 test_vm |                 0
(1 row)
===

Then the I/O threads value was changed to 10 while the VM was running. However, this change is reflected only in the next_run configuration, not in vm_static:

===
engine=# select vm_name,num_of_io_threads from vm_static where vm_guid in (select vm_guid from vm_static where vm_name='test_vm');
 vm_name | num_of_io_threads
---------+-------------------
 test_vm |                 0
(1 row)

vm_configuration in the next_run snapshot table:
<IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>10</NumOfIoThreads>
===

Now, trying to disable the I/O threads again with the ovirt_vm module does not send an update request to the manager:

===
- name: change i/o thread
  ovirt_vm:
    state: running
    auth: "{{ ovirt_auth }}"
    name: "{{ item }}"
    io_threads: 0
  with_items:
    - test_vm

PLAY [update VM conf] **********************************************************

TASK [Obtain SSO token] ********************************************************
ok: [localhost]

TASK [change i/o thread] *******************************************************
ok: [localhost] => (item=test_vm)

PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
===

This happens because "update_check" in the ovirt_vm module returns "True".
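The mismatch above can be illustrated with a small sketch (the XML fragment is the one stored in the next_run snapshot; the variable names are illustrative, not from the engine code): vm_static and the live API still report 0, while the next_run snapshot already carries 10.

```python
# Sketch of the state mismatch between vm_static and the next_run snapshot.
import xml.etree.ElementTree as ET

# Fragment as stored in the next_run snapshot (from the reproduction above),
# wrapped in a root element so it parses standalone.
next_run_fragment = (
    '<vm><IsSmartcardEnabled>false</IsSmartcardEnabled>'
    '<NumOfIoThreads>10</NumOfIoThreads></vm>'
)

vm_static_io_threads = 0  # value still reported by vm_static and the live API
next_run_io_threads = int(
    ET.fromstring(next_run_fragment).findtext('NumOfIoThreads')
)

# The running configuration says 0, the pending configuration says 10.
print(vm_static_io_threads, next_run_io_threads)  # 0 10
```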
The module first fetches the current VM configuration, which reports "io_threads" as 0 because the new value exists only in the next_run configuration:

===
curl --insecure --user 'admin@internal:RedHat1!' --request GET --header 'Version: 4' --header 'Content-Type: application/xml' --header 'Accept: application/xml' https://manager/ovirt-engine/api/vms/a1116385-1543-4f11-83bb-bea1a3210511 | grep -A4 -B4 threads

  <io>
    <threads>0</threads>
  </io>
===

ovirt_vm.py:

1202     def update_check(self, entity):
1203         def check_cpu_pinning():
1204             if self.param('cpu_pinning'):
1205                 current = []
-------------
--------------
1246                 equal(self.param('smartcard_enabled'), getattr(vm_display, 'smartcard_enabled', False)) and
1247                 equal(self.param('io_threads'), entity.io.threads) and    => this matches, so update_check returns True
1248                 equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
1249                 equal(self.param('serial_console'), entity.console.enabled) and

Since update_check returns True, the module does not send an update request to the manager.

Version-Release number of selected component (if applicable):
ansible 2.7.0
rhvm-4.2.6.4-0.1.el7ev.noarch

How reproducible:
100 %

Steps to Reproduce:
[1] Start a VM with I/O threads disabled.
[2] Change the I/O threads to 10 while the VM is running.
[3] Try to disable them again using the ovirt_vm module.

Actual results:
The ovirt_vm module does not send an update request to the manager when a next_run configuration is reverted back to the original value.

Expected results:
The ovirt_vm module should allow reverting the change.

Additional info:
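The failing comparison can be reduced to a minimal sketch. This is not the actual module code: `equal` here is a simplified stand-in for the helper in Ansible's oVirt module_utils (an unset parameter always counts as equal), and the `IO` class just mimics the API entity.

```python
# Minimal sketch of why update_check short-circuits in this scenario.
def equal(param, other):
    """Simplified stand-in for the module's helper: an unset module
    parameter (None) always matches the current entity value."""
    if param is None:
        return True
    return param == other

class IO:
    # What the engine reports for the *running* VM: still 0, because the
    # change to 10 lives only in the next_run configuration.
    threads = 0

desired_io_threads = 0  # the playbook asks to set io_threads back to 0

# The comparison matches, update_check returns True, and no update request
# is sent, even though next_run still holds io_threads=10.
print(equal(desired_io_threads, IO.threads))  # True
```

Because update_check compares only against the running configuration, the revert to the original value looks like a no-op; comparing against the next_run configuration instead would detect the pending difference.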
Targeting 4.2.8, but the fix will be part of Ansible, not RHV, so as soon as we know which Ansible version contains the fix, we will update the bug.
Verified in ansible-2.7.1-1.el7ae.noarch: the change on a running VM can now be overwritten by the ovirt_vm Ansible module.
The fix is included in ansible-2.7.1-1.el7ae delivered by https://errata.devel.redhat.com/advisory/37618