Bug 1744599

Summary: Authentication operator persistently rolls out oauth-openshift pods
Product: OpenShift Container Platform
Component: apiserver-auth
Version: 4.1.0
Target Release: 4.1.z
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED ERRATA
Type: Bug
Keywords: OSE41z_next
Reporter: Robert Sandu <rsandu>
Assignee: Mo <mkhan>
QA Contact: scheng
CC: aos-bugs, mfojtik, mkhan, nagrawal, rcarrata
Bug Depends On: 1747480
Last Closed: 2019-09-20 12:29:24 UTC

Description Robert Sandu 2019-08-22 13:58:13 UTC
Description of problem: authentication operator persistently rolls out oauth-openshift pods. 

Per the openshift-authentication-operator pod logs, the issue seems to point to an unsuccessful TLS health check of the https://oauth-openshift.apps.<SUBDOMAIN> server route:

E0814 10:03:23.128817       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
I0814 10:03:23.129117       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-08T09:13:12Z","message":"RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-14T07:49:53Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2019-08-10T21:37:39Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2019-08-11T13:29:27Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
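
The health check can be reproduced from outside the operator by querying the route directly. A minimal sketch, assuming the route lives in the openshift-authentication namespace and that /healthz is the path being probed (both are assumptions here):

oc get route oauth-openshift -n openshift-authentication
curl -kv https://oauth-openshift.apps.<SUBDOMAIN>/healthz

A handshake that hangs or times out here would match the RouteHealthDegraded message above.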

Version-Release number of selected component (if applicable): 4.1.9

Actual results: authentication operator keeps rolling out oauth-openshift pods in a loop, which leads to authentication service disruption.

Expected results: the authentication operator should only roll out oauth-openshift pods when necessary.

Comment 5 Mo 2019-08-23 16:56:19 UTC
The logs do not contain events or pod logs from the operator's namespace. There also seems to have been an attempt to disable the authn operator, but it was applied to the wrong resource: compare "oc get authentication.config" with "oc get authentication.operator" - the latter is the resource that can be used to pause the operator.
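
If pausing the operator is still needed, a minimal sketch against the correct resource (assuming the standard spec.managementState field is honored by this operator):

# Inspect the operator resource (not authentication.config)
oc get authentication.operator cluster -o yaml

# Pause reconciliation
oc patch authentication.operator cluster --type=merge -p "{\"spec\":{\"managementState\":\"Unmanaged\"}}"

# Resume reconciliation
oc patch authentication.operator cluster --type=merge -p "{\"spec\":{\"managementState\":\"Managed\"}}"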

Please make sure that the operator is running, and then directly collect pod logs, events, pods, deployments, etc from the openshift-authentication-operator namespace.
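
For example (the deployment names below are assumptions based on a stock 4.1 install):

oc get pods,deployments,events -n openshift-authentication-operator
oc logs deployment/authentication-operator -n openshift-authentication-operator
oc get pods,deployments,events -n openshift-authentication
oc logs deployment/oauth-openshift -n openshift-authentication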

The following command will cause the operator to emit an enormous amount of logs and will make it easier to understand what the issue is:

oc patch authentication.operator cluster --type=merge -p "{\"spec\":{\"operatorLogLevel\": \"TraceAll\"}}"

This can be disabled by:

oc patch authentication.operator cluster --type=merge -p "{\"spec\":{\"operatorLogLevel\": \"\"}}"
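
To confirm which level is currently applied (assuming the field reads back as set):

oc get authentication.operator cluster -o jsonpath='{.spec.operatorLogLevel}'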

Comment 18 errata-xmlrpc 2019-09-20 12:29:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2768