Description of problem:
When running ansible-playbook directly (not the installer wizard) with localhost as one of the nodes for an HA master deployment, the install fails. The inventory file contains:

  [etcd]
  localhost openshift_hostname="{{ ansible_default_ipv4.address }}"
  192.1.0.[4:5] openshift_hostname="{{ ansible_default_ipv4.address }}"

localhost is 192.1.0.3, and this bug does not occur when changing the inventory to:

  [etcd]
  192.1.0.[3:5] openshift_hostname="{{ ansible_default_ipv4.address }}"

Version-Release number of selected component (if applicable):
3.0.20-1.git.0.3703f1b.el7aos.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create an inventory with localhost as above.
2. Run ansible-playbook -i inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
3. There is no step 3.

Actual results:
TASK: [Retrieve the etcd cert tarballs] ***************************************
fatal: [localhost] => Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 586, in _executor
    exec_rc = self._executor_internal(host, new_stdin)
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 815, in _executor_internal
    complex_args=complex_args
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 1036, in _executor_internal_inner
    result = handler.run(conn, tmp, module_name, module_args, inject, complex_args)
  File "/usr/lib/python2.7/site-packages/ansible/runner/action_plugins/fetch.py", line 147, in run
    f = open(dest, 'w')
IOError: [Errno 13] Permission denied: u'/tmp/openshift-ansible-S0CPusr/etcd-192.1.0.3.tgz'

FATAL: all hosts have already failed -- aborting

Expected results:
Success.
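One possible explanation, not confirmed in this report: Ansible treats a host literally named localhost (or 127.0.0.1) as an implicit local connection unless ansible_connection is set, which changes how sudo and the controller-side temp directory interact compared to connecting over SSH by IP. A hypothetical sketch of forcing SSH while keeping the same group layout as above:

```ini
# Hypothetical sketch, not from the report: force an SSH connection so
# "localhost" is not treated as an implicit local connection.
[etcd]
localhost ansible_connection=ssh openshift_hostname="{{ ansible_default_ipv4.address }}"
192.1.0.[4:5] openshift_hostname="{{ ansible_default_ipv4.address }}"
```

Using the node's IP instead of the name localhost, as in the working inventory above, would have the same effect.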
Hi Matthew, what user are you running ansible as? I'm curious if we should use 'become' here to escalate privileges. I'll summon Jason.
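For reference, privilege escalation could be forced from the inventory rather than the playbook. This is a hypothetical sketch, assuming an Ansible release with 'become' support (1.9+); older releases would use ansible_sudo=true instead:

```ini
# Hypothetical sketch: escalate privileges for every host in the etcd group.
[etcd:vars]
ansible_become=true
ansible_become_user=root
```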
The username was cloud-user, which had sudo access.
Can you provide the following:
- The version of ansible
- The full inventory file(s)
- The ansible.cfg file, if you have made any changes (lookup path is ./ansible.cfg, ~/.ansible.cfg, /etc/ansible/ansible.cfg)
- The permissions and SELinux contexts of '/tmp/openshift-ansible-S0CPusr/' and '/etc/etcd/generated_certs/etcd-192.1.0.3.tgz'
- The error output when running ansible-playbook with -vvvv

When running against localhost, the task that failed *should* work, since it should already be running under sudo; otherwise the tarball creation would have failed, because the tarball is created under /etc/etcd/generated_certs/ and cloud-user wouldn't have permission there. There could be some oddity in our use of the fetch module, or SELinux could be preventing us from accessing a temp directory that was created without sudo.
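The failing line in the traceback is a plain open(dest, 'w') on the controller. A minimal sketch of that failure mode, assuming the cause is a destination directory the writing user can traverse but not write into (the directory name and tarball name below just echo the report; the helper is hypothetical):

```python
import errno
import os
import stat
import tempfile

def try_fetch_write(dest_dir, filename):
    """Mimic the failing line in the fetch action plugin: open(dest, 'w').

    Returns None on success, or the errno of the IOError on failure.
    """
    dest = os.path.join(dest_dir, filename)
    try:
        with open(dest, 'w') as f:  # in Python 3, IOError is an alias of OSError
            f.write('')
        return None
    except IOError as e:
        return e.errno

# Hypothetical reproduction: a temp dir the current user can enter (r-x) but
# not write into, like a /tmp/openshift-ansible-XXXXXX dir owned by another
# context.
base = tempfile.mkdtemp(prefix='openshift-ansible-')
os.chmod(base, stat.S_IRUSR | stat.S_IXUSR)

err = try_fetch_write(base, 'etcd-192.1.0.3.tgz')
# For an unprivileged user this is errno.EACCES (Errno 13, "Permission
# denied"), matching the traceback; root bypasses the mode bits entirely.
print(err)
```

This only reproduces the plain-DAC case; an SELinux denial would surface as the same Errno 13 even when the mode bits allow the write, which is why the contexts requested above matter.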
Jason, unfortunately I don't have the environment anymore.