Bug 1342028
| Summary: | user can not make use of environment variables to set openshift_cloudprovider_aws_{access,secret}_key within the ansible inventory |
|---|---|
| Product: | OpenShift Container Platform |
| Component: | Installer |
| Version: | 3.2.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | low |
| Target Milestone: | --- |
| Target Release: | 3.3.1 |
| Reporter: | Johnny Liu <jialiu> |
| Assignee: | Andrew Butcher <abutcher> |
| QA Contact: | Johnny Liu <jialiu> |
| Docs Contact: | |
| CC: | aos-bugs, bleanhar, jialiu, jokerman, mmccomas |
| Whiteboard: | |
| Fixed In Version: | |
| Doc Type: | If docs needed, set a value |
| Doc Text: | Previously, environment variable lookups and other variable expansion within the ansible inventory would not be correctly interpreted. These variables are now interpreted correctly; for example, openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" causes the AWS_ACCESS_KEY_ID environment variable to be set as the AWS cloud provider access key. |
| Story Points: | --- |
| Clone Of: | |
| | 1381710 (view as bug list) |
| Environment: | |
| Last Closed: | 2016-10-27 16:12:34 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 1381710 |
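
To make the Doc Text above concrete, here is a minimal inventory fragment of the kind this bug covers. This is a sketch only: the variable names and lookup syntax are taken from the inventory quoted later in the report, and AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY must be exported in the environment of the host that runs ansible-playbook.

[OSEv3:vars]
openshift_cloudprovider_kind=aws
# Pull the AWS credentials from the control host's environment instead of hard-coding them.
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"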
Description (Johnny Liu, 2016-06-02 09:53:22 UTC)

Comment 1 (Brenton Leanhardt)

Can you verify the variables are set?

Comment 2 (Johnny Liu)

(In reply to Brenton Leanhardt from comment #1)
> Can you verify the variables are set?

Yes, I set them on my localhost, where the ansible playbook runs, and I verified that they work with my own debugging playbook: 'AWS_ACCESS_KEY_ID' and 'AWS_SECRET_ACCESS_KEY' are printed out successfully.

$ cat debug-playbook.yaml
---
- name: Debugging
  hosts: masters[0]
  tasks:
  - set_fact:
      aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
      aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
  - debug: var=aws_access_key

Also, review my initial report: even if I did not set the variables, AWS_ACCESS_KEY_ID should be set to an empty string, not to {# lookup('env','AWS_ACCESS_KEY_ID') #}.

Comment 4 (Andrew Butcher)

Extended the playbook above to output the secret key. It works in my testing with ansible-2.1.0.0-1.
$ AWS_ACCESS_KEY_ID=hue AWS_SECRET_ACCESS_KEY=hae ansible-playbook ~/debug.yml
PLAY [Debugging] ***************************************************************
TASK [setup] *******************************************************************
Tuesday 09 August 2016 10:37:23 -0400 (0:00:00.039) 0:00:00.040 ********
ok: [localhost]
TASK [set_fact] ****************************************************************
Tuesday 09 August 2016 10:37:28 -0400 (0:00:05.884) 0:00:05.924 ********
ok: [localhost]
TASK [debug] *******************************************************************
Tuesday 09 August 2016 10:37:28 -0400 (0:00:00.046) 0:00:05.970 ********
ok: [localhost] => {
"aws_access_key": "hue"
}
TASK [debug] *******************************************************************
Tuesday 09 August 2016 10:37:29 -0400 (0:00:00.042) 0:00:06.013 ********
ok: [localhost] => {
"aws_secret_key": "hae"
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
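
For reference, the extended playbook used above likely looked as follows. This is an assumed reconstruction from the playbook in comment 2 and the output shown; the localhost target and the second debug task are inferred, not quoted from the comment.

# debug.yml (assumed reconstruction, not the exact file from the comment)
---
- name: Debugging
  hosts: localhost
  tasks:
  - set_fact:
      aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
      aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
  - debug: var=aws_access_key
  - debug: var=aws_secret_key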
Johnny Liu

(In reply to Andrew Butcher from comment #4)
> Extended the playbook above to output secret key. Works in my testing with
> ansible-2.1.0.0-1.

Just like comment 2, it also works in my test playbook; this bug is about it not working in the openshift-ansible installer.

Re-tested this bug with the 3.3/2016-08-10.5 puddle, and it still failed. Here is my inventory host file:

[OSEv3:children]
masters
nodes
nfs

[OSEv3:vars]
# The following parameters are used by openshift-ansible
ansible_ssh_user=root
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_master_default_subdomain=0811-80u.qe.rhcloud.com
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
deployment_type=openshift-enterprise
oreg_url=registry.ops.openshift.com/openshift3/ose-${component}:${version}
openshift_docker_additional_registries=registry.ops.openshift.com
openshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888,virt-openshift-05.lab.eng.nay.redhat.com:5000,virt-openshift-05.lab.eng.nay.redhat.com:5001,registry.ops.openshift.com
osm_use_cockpit=false
osm_cockpit_plugins=['cockpit-kubernetes']
openshift_node_kubelet_args={"minimum-container-ttl-duration": ["10s"], "maximum-dead-containers-per-container": ["1"], "maximum-dead-containers": ["20"], "image-gc-high-threshold": ["80"], "image-gc-low-threshold": ["70"]}
openshift_hosted_registry_selector="role=node,registry=enabled"
openshift_hosted_router_selector="role=node,router=enabled"
openshift_hosted_router_registryurl=registry.ops.openshift.com/openshift3/ose-${component}:${version}
debug_level=5
openshift_set_hostname=true
openshift_override_hostname_check=true
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_registry_storage_nfs_directory=/var/lib/exports
openshift_hosted_registry_storage_volume_name=regpv
openshift_hosted_registry_storage_access_modes=["ReadWriteMany"]
openshift_hosted_registry_storage_volume_size=17G

[masters]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-242-115-145.compute-1.amazonaws.com

[nodes]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-242-115-145.compute-1.amazonaws.com openshift_node_labels="{'role': 'node'}"
ec2-54-211-99-97.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem" openshift_public_hostname=ec2-54-211-99-97.compute-1.amazonaws.com openshift_node_labels="{'role': 'node','registry': 'enabled','router': 'enabled'}"

[nfs]
ec2-54-242-115-145.compute-1.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/libra.pem"

Installation failed at the following step:

TASK [openshift_master : Start and enable master] ******************************
Thursday 11 August 2016 02:47:26 +0000 (0:00:00.072) 0:09:24.285 *******
fatal: [ec2-54-242-115-145.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service atomic-openshift-master: Job for atomic-openshift-master.service failed because the control process exited with error code. See \"systemctl status atomic-openshift-master.service\" and \"journalctl -xe\" for details.\n"}

That is because the installer wrote an invalid value into the config file:

# cat /etc/sysconfig/atomic-openshift-master
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/master/master-config.yaml
AWS_ACCESS_KEY_ID={# lookup('env','AWS_ACCESS_KEY_ID') #}
AWS_SECRET_ACCESS_KEY={# lookup('env','AWS_SECRET_ACCESS_KEY') #}

This is the same behavior as described in my initial report.

Proposed fix: https://github.com/openshift/openshift-ansible/pull/2364

Verified this bug with openshift-ansible-3.3.30-1.git.0.b260e04.el7.noarch: PASS.
The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY now successfully set openshift_cloudprovider_aws_{access,secret}_key in the generated config files.
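
For illustration only, this is the shape the generated sysconfig file is expected to take once the fix is applied, assuming the same two variables are exported when ansible-playbook runs; the values below are placeholders, not output captured during verification.

# /etc/sysconfig/atomic-openshift-master (expected shape after the fix; placeholder values)
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/master/master-config.yaml
AWS_ACCESS_KEY_ID=<value of AWS_ACCESS_KEY_ID on the ansible control host>
AWS_SECRET_ACCESS_KEY=<value of AWS_SECRET_ACCESS_KEY on the ansible control host>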
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:2122