Description of problem:

playbook:
...
    users:
      - name: john.doe
        authz_name: internal-authz
        password: 123456
        valid_to: "2020-01-01 00:00:00Z"
...

TASK [oVirt.infra/roles/oVirt.aaa-jdbc : Manage internal users] *****************************************
task path: /usr/share/ansible/roles/oVirt.infra/roles/oVirt.aaa-jdbc/tasks/main.yml:5
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: pkubica
<localhost> EXEC /bin/sh -c 'echo ~pkubica && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041 `" && echo ansible-tmp-1539260954.59-223638909789041="` echo /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<localhost> PUT /home/pkubica/.ansible/tmp/ansible-local-6406gYcxoO/tmpdAdCOF TO /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041/AnsiballZ_command.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041/ /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041/AnsiballZ_command.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041/AnsiballZ_command.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/pkubica/.ansible/tmp/ansible-tmp-1539260954.59-223638909789041/ > /dev/null 2>&1 && sleep 0'

The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_command_payload_yeAySR/ansible_command_payload.zip/ansible/module_utils/basic.py", line 2839, in run_command
    cmd = subprocess.Popen(args, **kwargs)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception

failed: [localhost] (item={u'password': 123456, u'authz_name': u'internal-authz', u'name': u'john.doe', u'valid_to': u'2018-01-01 00:00:00Z'}) => {
    "changed": true,
    "cmd": "/usr/bin/ovirt-aaa-jdbc-tool user add john.doe",
    "failed_when_result": true,
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/bin/ovirt-aaa-jdbc-tool user add john.doe",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": {
        "authz_name": "internal-authz",
        "name": "john.doe",
        "password": 123456,
        "valid_to": "2018-01-01 00:00:00Z"
    },
    "msg": "[Errno 2] No such file or directory",
    "rc": 2
}

Version-Release number of selected component (if applicable):
ovirt-ansible-infra-1.1.9-0.1.master.20181010120045.el7.noarch

How reproducible:
always

Steps to Reproduce:
1. create users via infra role

Actual results:
can't add users via infra role
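Additional info:
The "[Errno 2] No such file or directory" (rc 2) comes from the command module trying to execute /usr/bin/ovirt-aaa-jdbc-tool on the machine running the play, where the tool is not installed. A minimal sketch of a pre-flight check that would surface this with a clearer message (the task names and wording are illustrative, not part of the role):

    - name: Check that ovirt-aaa-jdbc-tool is available on this host
      stat:
        path: /usr/bin/ovirt-aaa-jdbc-tool
      register: aaa_jdbc_tool

    - name: Fail early if the tool is missing
      fail:
        msg: >-
          /usr/bin/ovirt-aaa-jdbc-tool not found; internal users/groups can only
          be managed on the engine host where the aaa-jdbc packages are installed
      when: not aaa_jdbc_tool.stat.exists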
You obviously didn't execute this on the engine. So we must document that this must be done on the engine.
This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
This is a documentation change only, and it cannot be a regression, because the role has never been able to create users/roles using aaa-jdbc when it was not executed on the engine host.
The fix isn't included in the master 4.3 build ovirt-ansible-infra-1.1.10-1.el7.noarch.
Did you also check the examples [1]?

[1] https://github.com/oVirt/ovirt-ansible-infra/blob/d9b818927946d89bc2bb8740b4afb86611225bc5/examples/ovirt_infra.yml#L3
The roles are meant to be executed on the engine, and the RPMs are shipped only there, so that means localhost, so the examples are OK. Currently, if the machine doesn't have aaa-jdbc installed, the role will fail to set up internal users/groups.
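For reference, a minimal sketch of how the play is expected to be launched on the engine host itself, assuming the hosts: localhost / connection: local layout used in the linked example; other role variables are omitted, and the user entry just mirrors this report:

    ---
    - name: oVirt infra
      hosts: localhost
      connection: local
      vars:
        ...
        users:
          - name: john.doe
            authz_name: internal-authz
            password: "123456"   # quoted so YAML keeps it a string
            valid_to: "2020-01-01 00:00:00Z"
      roles:
        - oVirt.infra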
Verified in ovirt-ansible-infra-1.1.12-0.1.master.20190117095036.el7.noarch
This bugzilla is included in oVirt 4.3.0 release, published on February 4th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.0 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.