Bug 1594385 - Ansible issue on deployment due to permission failure with mistral
Summary: Ansible issue on deployment due to permission failure with mistral
Keywords:
Status: CLOSED DUPLICATE of bug 1593345
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Emilien Macchi
QA Contact: Gurenko Alex
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-06-22 19:26 UTC by Dimitri Savineau
Modified: 2018-06-22 21:00 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-22 21:00:58 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Dimitri Savineau 2018-06-22 19:26:53 UTC
Description of problem:
With the latest OSP14 puddle, the deployment part executed via ansible (after the heat stack creation) fails because ansible cannot create its temporary directory. By default, ansible creates the temporary directory in the home directory of the current user.
In this case, the ansible command is run by the mistral user, but no home directory exists for this user (/home/mistral). As a result, the ansible command fails on the first task.
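For illustration, a minimal shell sketch of the failing step (the path below is a stand-in, not taken from the deployment): ansible's default remote_tmp expands to ~/.ansible/tmp under the connecting user's home, so the same mkdir that ansible issues fails when that home does not exist and cannot be created.

```shell
# Minimal reproduction sketch (hypothetical path): point HOME at a
# location that does not exist and cannot be created, mimicking the
# missing /home/mistral, then run the same mkdir that ansible issues.
HOME=/proc/no-such-home/mistral   # stand-in for the missing home directory
( umask 77 && mkdir -p "$HOME/.ansible/tmp/ansible-tmp-demo" ) 2>/dev/null
RC=$?
echo "mkdir exit code: $RC"   # non-zero, matching "exited with result 1"
```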

Version-Release number of selected component (if applicable):
OSP14 puddle 2018-06-19.4 (latest)

How reproducible:
100%

Steps to Reproduce:
1. Deploy the overcloud

Actual results:
2018-06-22 10:40:04Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud/afeb8faf-1c02-4f07-be0f-394c79f1d861 CREATE_COMPLETE 

Deploying overcloud configuration
Enabling ssh admin (tripleo-admin) for hosts:
192.168.24.18 192.168.24.6 192.168.24.11
Using ssh user heat-admin for initial connection.
Using ssh key at /home/stack/.ssh/id_rsa for initial connection.
Inserting TripleO short term key for 192.168.24.18
Inserting TripleO short term key for 192.168.24.6
Inserting TripleO short term key for 192.168.24.11
Starting ssh admin enablement workflow
ssh admin enablement workflow - RUNNING.
ssh admin enablement workflow - RUNNING.
ssh admin enablement workflow - RUNNING.
ssh admin enablement workflow - COMPLETE.
Removing TripleO short term key from 192.168.24.18
Removing TripleO short term key from 192.168.24.6
Removing TripleO short term key from 192.168.24.11
Removing short term keys locally
Enabling ssh admin - COMPLETE.
Config downloaded at /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f
Inventory generated at /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f/tripleo-ansible-inventory.yaml
Running ansible playbook at /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f/deploy_steps_playbook.yaml. See log file at /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f/ansible.log for progress. ...

Using /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f/ansible.cfg as config file

PLAY [Gather facts from undercloud] ********************************************

TASK [Gathering Facts] *********************************************************
fatal: [undercloud]: UNREACHABLE! => {"changed": false, "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /home/mistral/.ansible/tmp/ansible-tmp-1529664117.85-112525428661802 `\" && echo ansible-tmp-1529664117.85-112525428661802=\"` echo /home/mistral/.ansible/tmp/ansible-tmp-1529664117.85-112525428661802 `\" ), exited with result 1", "unreachable": true}

PLAY RECAP *********************************************************************
undercloud                 : ok=0    changed=0    unreachable=1    failed=0   


Ansible failed, check log at /var/lib/mistral/e9889a74-33e2-4c5e-a9f7-275e3bbae18f/ansible.log.

Expected results:
The deployment runs successfully

Additional info:
This issue was not present in the previous puddle release (2018-06-13.2).
In the changelog I can see that the ansible version was bumped from ansible-2.4.3.0-1.el7ae to ansible-2.5.4-1.el7ae, which could be the root cause of this issue.
Using a path rooted in /tmp for the remote_tmp ansible configuration option seems to solve the issue.

# ansible.cfg
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp
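As a quick sanity check of the workaround (a sketch; the ansible-tmp-demo name is hypothetical, not part of the deployment), the same mkdir succeeds once the temporary path is rooted in /tmp, which any user can write to regardless of whether a home directory exists:

```shell
# The workaround's per-user path under /tmp is creatable by any user,
# home directory or not (ansible-tmp-demo is a hypothetical name).
REMOTE_TMP="/tmp/.ansible-${USER}/tmp"
( umask 77 && mkdir -p "$REMOTE_TMP/ansible-tmp-demo" )
RC=$?
echo "mkdir exit code: $RC"   # 0: the directory was created
rm -rf "/tmp/.ansible-${USER}"   # clean up the demo directory
```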

Comment 1 Alex Schultz 2018-06-22 21:00:58 UTC

*** This bug has been marked as a duplicate of bug 1593345 ***

