Bug 2037944

Summary: Customizing the OAuth server URL does not apply to upgraded cluster
Product: OpenShift Container Platform
Component: oauth-apiserver
Reporter: OpenShift BugZilla Robot <openshift-bugzilla-robot>
Assignee: Standa Laznicka <slaznick>
QA Contact: Xingxing Xia <xxia>
Status: CLOSED ERRATA
Severity: high
Priority: high
Version: 4.9
Target Release: 4.9.z
Hardware: Unspecified
OS: Unspecified
CC: kostrows, mfojtik, surbania, xxia
Last Closed: 2022-07-12 13:04:04 UTC
Bug Depends On: 2030961

Comment 5 Xingxing Xia 2022-07-08 06:28:01 UTC
Save the normal route/oauth-openshift JSON:
$ oc get route oauth-openshift -n openshift-authentication -o json > route_oauth-openshift.json
Check the "ingress" array under the "status" field; it looks like this:
$ cat route_oauth-openshift.json
...
        "ingress": [
            {
                "conditions": [
                    {
                        "lastTransitionTime": "2022-07-08T01:39:54Z",
                        "status": "True",
                        "type": "Admitted"
                    }
                ],
                "host": "oauth-openshift.apps.<snipped>.openshift.com",
                "routerCanonicalHostname": "router-default.apps.<snipped>.openshift.com",
                "routerName": "default",
                "wildcardPolicy": "None"
            }
        ]
...

As per the analysis in https://bugzilla.redhat.com/show_bug.cgi?id=2030961#c13, a messed-up "status" on the oauth-openshift route is what caused the degraded condition. So mess it up by adding two more elements, copied from the degraded customer environment's must-gather, to the "ingress" array as below:
$ vi route_oauth-openshift.json
...
        "ingress": [
            {
              "conditions": [
                {
                  "lastTransitionTime": "2021-09-17T05:55:25Z",
                  "status": "True",
                  "type": "Admitted"
                }
              ],
              "host": "oauth-openshift.se1.ebaykorea.com",
              "routerCanonicalHostname": "se1.ebaykorea.com",
              "routerName": "default",
              "wildcardPolicy": "None"
            },
            {
                "conditions": [
                    {
                        "lastTransitionTime": "2022-07-08T01:39:54Z",
                        "status": "True",
                        "type": "Admitted"
                    }
                ],
                "host": "oauth-openshift.apps.<snipped>.openshift.com",
                "routerCanonicalHostname": "router-default.apps.<snipped>.openshift.com",
                "routerName": "default",
                "wildcardPolicy": "None"
            },
            {
              "conditions": [
                {
                  "lastTransitionTime": "2021-10-05T00:53:55Z",
                  "status": "True",
                  "type": "Admitted"
                }
              ],
              "host": "oauth-openshift.se1.ebaykorea.com",
              "routerCanonicalHostname": "public.ebaykorea.com",
              "routerName": "public",
              "wildcardPolicy": "None"
            }
        ]
...
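The hand edit above can also be scripted. A minimal Python sketch of the same JSON surgery, with the entry values copied from the must-gather entries above and the saved file stood in by an in-memory dict (the `example.com` host is a placeholder):

```python
def append_stale_ingress(route: dict, host: str, router_name: str,
                         canonical: str, transition_time: str) -> dict:
    """Append an extra (stale) entry to .status.ingress, mimicking the
    leftover entries seen in the customer's must-gather."""
    route.setdefault("status", {}).setdefault("ingress", []).append({
        "conditions": [{"lastTransitionTime": transition_time,
                        "status": "True", "type": "Admitted"}],
        "host": host,
        "routerCanonicalHostname": canonical,
        "routerName": router_name,
        "wildcardPolicy": "None",
    })
    return route

# Stand-in for the saved route_oauth-openshift.json:
route = {"status": {"ingress": [{"routerName": "default",
                                 "host": "oauth-openshift.apps.example.com"}]}}
append_stale_ingress(route, "oauth-openshift.se1.ebaykorea.com", "public",
                     "public.ebaykorea.com", "2021-10-05T00:53:55Z")
print(len(route["status"]["ingress"]))  # → 2
```

In practice one would load route_oauth-openshift.json with `json.load`, apply the same append, and write it back before the PUT below.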

Then send a request to update route/oauth-openshift with the messed-up "status" content:
$ oc login -u kubeadmin
$ KUBEADMIN_TOKEN=$(oc whoami -t)
$ curl -k -X PUT -d @route_oauth-openshift.json -H 'Content-Type: application/json' -H "Authorization: Bearer $KUBEADMIN_TOKEN" "$(oc whoami --show-server)/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift/status"
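Note the PUT targets the route's status subresource: edits through the main resource endpoint generally don't change status, so the request goes to the `/status` URL directly. A small sketch of how that URL is composed (the server value is a placeholder; in the session above it comes from `oc whoami --show-server`):

```python
def status_subresource_url(server: str, namespace: str, name: str) -> str:
    """Build the Route status-subresource URL used by the curl PUT above."""
    return (f"{server}/apis/route.openshift.io/v1"
            f"/namespaces/{namespace}/routes/{name}/status")

url = status_subresource_url("https://api.example.com:6443",
                             "openshift-authentication", "oauth-openshift")
print(url)
```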

Check what the "status" looks like after the mess:
$ oc get route oauth-openshift -n openshift-authentication -o yaml | grep -A 100 "^status"
status:
  ingress:
  - conditions:
    - lastTransitionTime: "2022-07-08T02:56:49Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.apps.<snipped>.openshift.com
    routerCanonicalHostname: router-default.apps.<snipped>.openshift.com
    routerName: default
    wildcardPolicy: None
  - conditions:
    - lastTransitionTime: "2022-07-08T01:39:54Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.apps.<snipped>.openshift.com
    routerCanonicalHostname: router-default.apps.<snipped>.openshift.com
    routerName: default
    wildcardPolicy: None
  - conditions:
    - lastTransitionTime: "2021-10-05T00:53:55Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.se1.ebaykorea.com
    routerCanonicalHostname: public.ebaykorea.com
    routerName: public
    wildcardPolicy: None

Observed for some time; the authentication operator does not become degraded:
$ oc get co authentication
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.9.42    True        False         False      4h10m
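The same check can be done programmatically from the ClusterOperator's conditions instead of eyeballing the table; a hedged sketch, assuming the condition layout seen in `oc get co authentication -o json`, with a stand-in dict in place of the live object:

```python
def is_degraded(clusteroperator: dict) -> bool:
    """Return True if the ClusterOperator reports a Degraded=True condition."""
    for cond in clusteroperator.get("status", {}).get("conditions", []):
        if cond.get("type") == "Degraded":
            return cond.get("status") == "True"
    return False

# Stand-in for `oc get co authentication -o json`, matching the table above:
co = {"status": {"conditions": [
    {"type": "Available", "status": "True"},
    {"type": "Progressing", "status": "False"},
    {"type": "Degraded", "status": "False"},
]}}
print(is_degraded(co))  # → False
```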

So the issue is fixed in 4.9.42. Moving to VERIFIED.

Comment 7 errata-xmlrpc 2022-07-12 13:04:04 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.42 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5509