Bug 1472352 - Installs failing on S3 configuration due to missing openshift_hosted_registry_storage_s3_kmskeyid
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.6.0
Severity: high
Assigned To: Kenny Woodson
QA Contact: Mike Fiedler
Keywords: Regression
Reported: 2017-07-18 10:20 EDT by Mike Fiedler
Modified: 2017-08-16 15 EDT
CC: 6 users

Last Closed: 2017-08-10 01:31:01 EDT
Type: Bug
Description Mike Fiedler 2017-07-18 10:20:32 EDT
Description of problem:

openshift-ansible installs from master started failing today (18 July 2017) with the following error when an S3 registry is configured:

fatal: [ec2-54-218-53-245.us-west-2.compute.amazonaws.com]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'openshift_hosted_registry_storage_s3_kmskeyid' is undefined\n\nThe error appears to have been in '/home/slave3/workspace/Launch Environment Flexy/private-openshift-ansible/roles/openshift_hosted/tasks/registry/storage/object_storage.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Ensure the resgistry secret exists\n  ^ here\n"}
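For context, the failure pattern is a task that dereferences an inventory variable with no fallback, so Ansible aborts as soon as the variable is missing from the inventory. A minimal sketch of that pattern, and of the `default` filter that would hedge it, is below — the task body and module arguments here are illustrative, not the actual `object_storage.yml` source:

```yaml
# Illustrative sketch only -- not the actual openshift-ansible task.
# A template lookup like this fails hard with
# "'openshift_hosted_registry_storage_s3_kmskeyid' is undefined"
# whenever the inventory omits the variable:
- name: Ensure the registry secret exists
  oc_secret:
    name: registry-secret
    contents:
    - path: kms-key-id
      data: "{{ openshift_hosted_registry_storage_s3_kmskeyid }}"

# Guarding the lookup with a default keeps the optional key optional:
- name: Ensure the registry secret exists
  oc_secret:
    name: registry-secret
    contents:
    - path: kms-key-id
      data: "{{ openshift_hosted_registry_storage_s3_kmskeyid | default('') }}"
```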

Version-Release number of selected component (if applicable): openshift-ansible master @ a41fc449c3e56646365701e5b7a40b29eb9e3b67


How reproducible: Always, when S3 is configured as the registry storage.


Steps to Reproduce:

Run a byo/config.yml install with the inventory below.

Actual results:

Install fails in roles/openshift_hosted/tasks/registry/storage/object_storage.yml with the "'openshift_hosted_registry_storage_s3_kmskeyid' is undefined" error shown above.

Expected results:

A successful install, or documentation of the new configuration parameter.  Maybe we missed a card.
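Until the installer supplies a default, a possible workaround (an assumption, not verified here) is to define the variable explicitly in the inventory alongside the other S3 settings:

```ini
# Workaround sketch: define the missing variable in [OSEv3:vars]
# next to the other openshift_hosted_registry_storage_s3_* settings.
# An empty value is an assumption for buckets without SSE-KMS
# encryption; with SSE-KMS, set it to the KMS key id.
openshift_hosted_registry_storage_s3_kmskeyid=
```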


Additional info:


[OSEv3:children]
masters
nodes

etcd

[OSEv3:vars]

# The following parameters are used by post-actions
iaas_name=AWS
use_rpm_playbook=true
openshift_playbook_rpm_repos=[{'id': 'aos-playbook-rpm', 'name': 'aos-playbook-rpm', 'baseurl': 'http://download.eng.bos.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift/3.6/latest/x86_64/os', 'enabled': 1, 'gpgcheck': 0}]

update_is_images_url=registry.ops.openshift.com

# The following parameters are used by openshift-ansible
ansible_ssh_user=root

openshift_cloudprovider_kind=aws

openshift_cloudprovider_aws_access_key=<key>
openshift_cloudprovider_aws_secret_key=<secretkey>
openshift_master_default_subdomain_enable=true
openshift_master_default_subdomain=0718-ice.qe.rhcloud.com


openshift_auth_type=allowall

openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

deployment_type=openshift-enterprise
openshift_cockpit_deployer_prefix=registry.ops.openshift.com/openshift3/
osm_cockpit_plugins=['cockpit-kubernetes']
osm_use_cockpit=false
oreg_url=registry.ops.openshift.com/openshift3/ose-${component}:${version}
openshift_docker_additional_registries=registry.ops.openshift.com
openshift_docker_insecure_registries=registry.ops.openshift.com
use_cluster_metrics=true
openshift_master_cluster_method=native
openshift_master_dynamic_provisioning_enabled=true
osm_default_node_selector=region=primary
openshift_disable_check=disk_availability,memory_availability
openshift_master_portal_net=172.24.0.0/14
osm_cluster_network_cidr=172.20.0.0/14
osm_host_subnet_length=8
openshift_node_kubelet_args={"pods-per-core": ["0"], "max-pods": ["250"],"minimum-container-ttl-duration": ["10s"], "maximum-dead-containers-per-container": ["1"], "maximum-dead-containers": ["20"], "image-gc-high-threshold": ["80"], "image-gc-low-threshold": ["70"]}
openshift_registry_selector="region=infra,zone=default"
openshift_hosted_router_selector="region=infra,zone=default"
openshift_hosted_router_registryurl=registry.ops.openshift.com/openshift3/ose-${component}:${version}
debug_level=2
openshift_set_hostname=true
openshift_override_hostname_check=true
os_sdn_network_plugin_name=redhat/openshift-ovs-subnet
openshift_hosted_router_replicas=1
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey=<key>
openshift_hosted_registry_storage_s3_secretkey=<secretkey>
openshift_hosted_registry_storage_s3_bucket=aoe-svt-test
openshift_hosted_registry_storage_s3_region=us-west-2
openshift_hosted_registry_replicas=1
openshift_hosted_metrics_deploy=false
openshift_hosted_metrics_deployer_prefix=registry.ops.openshift.com/openshift3/
openshift_hosted_metrics_deployer_version=3.6.0
openshift_hosted_metrics_storage_kind=dynamic
openshift_hosted_metrics_storage_volume_size=25Gi
openshift_hosted_logging_deploy=false
openshift_hosted_logging_deployer_prefix=registry.ops.openshift.com/openshift3/
openshift_hosted_logging_elasticsearch_pvc_size=25Gi
openshift_hosted_logging_storage_kind=dynamic
openshift_hosted_logging_deployer_version=3.6.0
openshift_hosted_logging_elasticsearch_cluster_size=1


[lb]


[etcd]
ec2-54-202-5-75.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-54-202-5-75.us-west-2.compute.amazonaws.com


[masters]
ec2-54-202-5-75.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-54-202-5-75.us-west-2.compute.amazonaws.com

[nodes]
ec2-54-202-5-75.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-54-202-5-75.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_scheduleable=false

ec2-54-201-202-43.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-54-201-202-43.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"


ec2-54-200-19-6.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-54-200-19-6.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
ec2-34-223-224-143.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root  openshift_public_hostname=ec2-34-223-224-143.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

[nfs]
Comment 3 Mike Fiedler 2017-07-21 11:04:37 EDT
Verified on openshift-ansible.noarch        3.6.153-1.git.0.5a6bf7d.el7
Comment 5 errata-xmlrpc 2017-08-10 01:31:01 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1716
