Bug 1342028 - User cannot use environment variables to set openshift_cloudprovider_aws_{access,secret}_key within the Ansible inventory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 3.3.1
Assignee: Andrew Butcher
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks: 1381710
 
Reported: 2016-06-02 09:53 UTC by Johnny Liu
Modified: 2016-10-27 16:12 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, environment variable lookups and other variable expansions within the Ansible inventory were not interpreted correctly. These variables are now interpreted correctly; for example, openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" causes the value of the AWS_ACCESS_KEY_ID environment variable to be used as the AWS cloud provider access key.
Clone Of:
Cloned As: 1381710
Environment:
Last Closed: 2016-10-27 16:12:34 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:2122 (normal, SHIPPED_LIVE): OpenShift Container Platform atomic-openshift-utils bug fix update, last updated 2016-10-27 20:11:30 UTC

Description Johnny Liu 2016-06-02 09:53:22 UTC
Description of problem:
Following https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example, use environment variables to set the openshift_cloudprovider_aws_{access,secret}_key variables like the following:
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"

Running the installation fails, because invalid parameters are written to the master config file:
# cat /etc/sysconfig/atomic-openshift-master
OPTIONS=--loglevel=5
CONFIG_FILE=/etc/origin/master/master-config.yaml

AWS_ACCESS_KEY_ID={# lookup('env','AWS_ACCESS_KEY_ID') #}
AWS_SECRET_ACCESS_KEY={# lookup('env','AWS_SECRET_ACCESS_KEY') #}


<---ansible installation log--->
TASK: [openshift_cloud_provider | Set cloud provider facts] ******************* 
changed: [ec2-54-208-74-163.compute-1.amazonaws.com] => {"ansible_facts": {"openshift": {"cloudprovider": {"aws": {"access_key": "{# lookup('env','AWS_ACCESS_KEY_ID') #}", "secret_key": "{# lookup('env','AWS_SECRET_ACCESS_KEY') #}"}, "kind": "aws"},
<---ansible installation log--->


Version-Release number of selected component (if applicable):
openshift-ansible-3.0.94-1.git.0.67a822a.el7.noarch.rpm
openshift-ansible-docs-3.0.94-1.git.0.67a822a.el7.noarch.rpm
openshift-ansible-filter-plugins-3.0.94-1.git.0.67a822a.el7.noarch.rpm
openshift-ansible-lookup-plugins-3.0.94-1.git.0.67a822a.el7.noarch.rpm
openshift-ansible-playbooks-3.0.94-1.git.0.67a822a.el7.noarch.rpm
openshift-ansible-roles-3.0.94-1.git.0.67a822a.el7.noarch.rpm

How reproducible:
Always

Steps to Reproduce:
1. In the Ansible inventory, set openshift_cloudprovider_kind=aws and set openshift_cloudprovider_aws_{access,secret}_key via "{{ lookup('env', ...) }}" as above.
2. Export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY on the host running ansible-playbook, then run the installation.
3. Check /etc/sysconfig/atomic-openshift-master on the master.

Actual results:
The lookup expressions are written to /etc/sysconfig/atomic-openshift-master with their braces mangled into {# ... #}, and the master service fails to start.

Expected results:
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in /etc/sysconfig/atomic-openshift-master are set to the values of the corresponding environment variables.

Additional info:

Comment 1 Brenton Leanhardt 2016-06-09 20:11:22 UTC
Can you verify the variables are set?
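
For example, one quick check on the host where ansible-playbook is run:

$ env | grep -E '^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY)='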

Comment 2 Johnny Liu 2016-06-12 03:17:46 UTC
(In reply to Brenton Leanhardt from comment #1)
> Can you verify the variables are set?

Yes, I set them on the localhost where the ansible playbook runs, and I verified that they work using my own debugging playbook; 'AWS_ACCESS_KEY_ID' and 'AWS_SECRET_ACCESS_KEY' are printed out successfully.

$ cat debug-playbook.yaml
---
- name: Debugging
  hosts: masters[0]
  tasks:
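  # Note: lookup('env', ...) is evaluated on the control host where
  # ansible-playbook runs, so the variables must be exported there.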
  - set_fact:
      aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
      aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"

  - debug: var=aws_access_key

Reviewing my initial report: even if I had not set the variables, AWS_ACCESS_KEY_ID should have been set to an empty value, not to {# lookup('env','AWS_ACCESS_KEY_ID') #}. ({# ... #} is Jinja2 comment syntax, so it looks as if the braces of the lookup expression are being rewritten into comment delimiters instead of being evaluated.)
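
A possible workaround until this is fixed (assuming the standard BYO entry point, playbooks/byo/config.yml) would be to pass the keys as extra-vars instead, since those arrive as plain strings rather than inventory templates:

$ ansible-playbook -i hosts playbooks/byo/config.yml \
    -e "openshift_cloudprovider_aws_access_key=${AWS_ACCESS_KEY_ID}" \
    -e "openshift_cloudprovider_aws_secret_key=${AWS_SECRET_ACCESS_KEY}"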

Comment 4 Andrew Butcher 2016-08-09 14:38:50 UTC
I extended the playbook above to also output the secret key. It works in my testing with ansible-2.1.0.0-1.


$ AWS_ACCESS_KEY_ID=hue AWS_SECRET_ACCESS_KEY=hae ansible-playbook ~/debug.yml

PLAY [Debugging] ***************************************************************

TASK [setup] *******************************************************************
Tuesday 09 August 2016  10:37:23 -0400 (0:00:00.039)       0:00:00.040 ******** 
ok: [localhost]

TASK [set_fact] ****************************************************************
Tuesday 09 August 2016  10:37:28 -0400 (0:00:05.884)       0:00:05.924 ******** 
ok: [localhost]

TASK [debug] *******************************************************************
Tuesday 09 August 2016  10:37:28 -0400 (0:00:00.046)       0:00:05.970 ******** 
ok: [localhost] => {
    "aws_access_key": "hue"
}

TASK [debug] *******************************************************************
Tuesday 09 August 2016  10:37:29 -0400 (0:00:00.042)       0:00:06.013 ******** 
ok: [localhost] => {
    "aws_secret_key": "hae"
}

PLAY RECAP *********************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=0

Comment 5 Johnny Liu 2016-08-11 03:13:34 UTC
(In reply to Andrew Butcher from comment #4)
> I extended the playbook above to also output the secret key. It works in my
> testing with ansible-2.1.0.0-1.
> [...]

Just like comment 2, it also works in my test playbook; this bug is about it not working in the openshift-ansible installer.

I re-tested this bug with the 3.3/2016-08-10.5 puddle, and it still failed.
Here is my inventory host file:
[OSEv3:children]
masters
nodes
nfs

[OSEv3:vars]
# The following parameters are used by openshift-ansible
ansible_ssh_user=root
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_master_default_subdomain=0811-80u.qe.rhcloud.com
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
deployment_type=openshift-enterprise
oreg_url=registry.ops.openshift.com/openshift3/ose-${component}:${version}
openshift_docker_additional_registries=registry.ops.openshift.com
openshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888,virt-openshift-05.lab.eng.nay.redhat.com:5000,virt-openshift-05.lab.eng.nay.redhat.com:5001,registry.ops.openshift.com
osm_use_cockpit=false
osm_cockpit_plugins=['cockpit-kubernetes']
openshift_node_kubelet_args={"minimum-container-ttl-duration": ["10s"], "maximum-dead-containers-per-container": ["1"], "maximum-dead-containers": ["20"], "image-gc-high-threshold": ["80"], "image-gc-low-threshold": ["70"]}
openshift_hosted_registry_selector="role=node,registry=enabled"
openshift_hosted_router_selector="role=node,router=enabled"
openshift_hosted_router_registryurl=registry.ops.openshift.com/openshift3/ose-${component}:${version}
debug_level=5
openshift_set_hostname=true
openshift_override_hostname_check=true
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_registry_storage_nfs_directory=/var/lib/exports
openshift_hosted_registry_storage_volume_name=regpv
openshift_hosted_registry_storage_access_modes=["ReadWriteMany"]
openshift_hosted_registry_storage_volume_size=17G

[masters]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-242-115-145.compute-1.amazonaws.com


[nodes]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-242-115-145.compute-1.amazonaws.com openshift_node_labels="{'role': 'node'}"

ec2-54-211-99-97.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-211-99-97.compute-1.amazonaws.com openshift_node_labels="{'role': 'node','registry': 'enabled','router': 'enabled'}"


[nfs]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem"

Installation failed at the following step:
TASK [openshift_master : Start and enable master] ******************************
Thursday 11 August 2016  02:47:26 +0000 (0:00:00.072)       0:09:24.285 ******* 

fatal: [ec2-54-242-115-145.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service atomic-openshift-master: Job for atomic-openshift-master.service failed because the control process exited with error code. See \"systemctl status atomic-openshift-master.service\" and \"journalctl -xe\" for details.\n"}


That is because the installer writes an invalid value into the config file:
# cat /etc/sysconfig/atomic-openshift-master
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/master/master-config.yaml

AWS_ACCESS_KEY_ID={# lookup('env','AWS_ACCESS_KEY_ID') #}
AWS_SECRET_ACCESS_KEY={# lookup('env','AWS_SECRET_ACCESS_KEY') #}


This is the same behavior as described in my initial report.

Comment 6 Andrew Butcher 2016-09-07 18:04:31 UTC
Proposed fix: https://github.com/openshift/openshift-ansible/pull/2364

Comment 8 Johnny Liu 2016-10-10 10:09:40 UTC
Verified this bug with openshift-ansible-3.3.30-1.git.0.b260e04.el7.noarch, and it passes.

The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables now set openshift_cloudprovider_aws_{access,secret}_key successfully in the config files.
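
For example, a minimal end-to-end check on the host running ansible-playbook (the entry-point path is assumed to be the standard BYO playbook; adjust for your checkout):

$ export AWS_ACCESS_KEY_ID=<access key>
$ export AWS_SECRET_ACCESS_KEY=<secret key>
$ ansible-playbook -i hosts playbooks/byo/config.yml

Then, on the master:

# grep '^AWS_' /etc/sysconfig/atomic-openshift-master
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<secret key>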

Comment 10 errata-xmlrpc 2016-10-27 16:12:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:2122

