Bug 1504191 - Logging deploy configuring a bad oauth-proxy image location
Summary: Logging deploy configuring a bad oauth-proxy image location
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.7.0
Assignee: Scott Dodson
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-19 16:11 UTC by Mike Fiedler
Modified: 2017-11-28 22:18 UTC (History)
5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-28 22:18:08 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2017:3188
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update
Last Updated: 2017-11-29 02:34:54 UTC

Description Mike Fiedler 2017-10-19 16:11:47 UTC
Description of problem:

Deploying logging with openshift-ansible master configures a bad oauth-proxy image location for the logging-es Elasticsearch DC.

It is configuring:  registry.access.redhat.com/openshift3/oauth-proxy:v3.7

This leaves the Elasticsearch pods in ImagePullBackOff because this image does not exist.

Previously, it configured openshift/oauth-proxy:v1.0.0, which works fine.
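The failure mode can be illustrated with a small sketch. This is not the actual Ansible role code; the function and variable names are hypothetical, but it shows how joining the enterprise image prefix with the v3.7 tag produces a reference that does not exist in the registry, while the previous upstream prefix and tag do.

```python
# Hypothetical sketch of how a deploy role might compose the oauth-proxy
# image reference from a prefix and a tag; names are illustrative only.

def proxy_image(prefix: str, version: str) -> str:
    """Join an image prefix and tag into a full image reference."""
    return f"{prefix}oauth-proxy:{version}"

# Buggy behavior reported here: the enterprise prefix plus the v3.7 tag
# yields a location where no image has been published yet.
bad = proxy_image("registry.access.redhat.com/openshift3/", "v3.7")

# Previous (working) behavior: the upstream image with its own tag.
good = proxy_image("openshift/", "v1.0.0")

print(bad)   # registry.access.redhat.com/openshift3/oauth-proxy:v3.7
print(good)  # openshift/oauth-proxy:v1.0.0
```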

openshift-ansible is at commit 70d7173aef356f834c1d4c7cd533170f13f9f665. I tried installing with 3.7.0-0.158.0 but hit a different bug that is fixed in latest master.


How reproducible: Always

Steps to Reproduce:
1. Deploy logging with the inventory below using openshift-ansible master


Actual results:

Elasticsearch pods will not start; they are stuck in ImagePullBackOff due to the bad oauth-proxy image configuration.


Additional info:

[OSEv3:children]
masters
etcd

[masters]
ec2-54-212-205-86.us-west-2.compute.amazonaws.com

[etcd]
ec2-54-212-205-86.us-west-2.compute.amazonaws.com
[OSEv3:vars]
deployment_type=openshift-enterprise

openshift_deployment_type=openshift-enterprise
openshift_release=v3.7
openshift_image_tag=v3.7.0



openshift_logging_install_logging=true
openshift_logging_master_url=https://ec2-54-212-205-86.us-west-2.compute.amazonaws.com:8443
openshift_logging_master_public_url=https://ec2-54-212-205-86.us-west-2.compute.amazonaws.com:8443
openshift_logging_kibana_hostname=kibana.apps.1019-x2k.qe.rhcloud.com
openshift_logging_namespace=logging
openshift_logging_image_prefix=registry.ops.openshift.com/openshift3/
openshift_logging_image_version=v3.7
openshift_logging_es_cluster_size=2
openshift_logging_es_pvc_dynamic=true
openshift_logging_elasticsearch_pvc_size=100Gi
openshift_logging_fluentd_read_from_head=false
openshift_logging_use_mux=false
openshift_logging_use_ops=false
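For reference, a deployment with the inventory above would be run roughly as follows. The playbook path is an assumption based on the 3.7-era openshift-ansible layout; adjust it to match your checkout.

```shell
# Sketch: deploy logging using the inventory above.
# The playbook path below is assumed from the 3.7-era repository layout.
ansible-playbook -i inventory \
    playbooks/byo/openshift-cluster/openshift-logging.yml
```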

Comment 2 ewolinet 2017-10-19 18:20:37 UTC
This is because the image won't be built until GA.
I'll update the image used for enterprise deployments until it is built.
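Until the fix lands, an interim workaround would be to point the Elasticsearch DC back at the upstream image by hand. The DC and container names below are illustrative, not taken from this report; check the actual names in your cluster first.

```shell
# Hypothetical workaround: retag the proxy container on the ES DC back to
# the upstream image that exists. DC and container names are illustrative;
# list the real ones first with:
#   oc -n logging get dc
oc -n logging set image dc/logging-es-data-master-abc123 \
    proxy=openshift/oauth-proxy:v1.0.0
```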

Comment 5 Anping Li 2017-10-26 07:28:10 UTC
Verified and passed with openshift-ansible-3.7.0-0.178.0.

Comment 8 errata-xmlrpc 2017-11-28 22:18:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188

