Bug 2037944 - Customizing the OAuth server URL does not apply to upgraded cluster
Summary: Customizing the OAuth server URL does not apply to upgraded cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oauth-apiserver
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.z
Assignee: Standa Laznicka
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On: 2030961
Blocks:
 
Reported: 2022-01-06 21:39 UTC by OpenShift BugZilla Robot
Modified: 2022-07-12 13:04 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-12 13:04:04 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-authentication-operator pull 531 0 None open [release-4.9] Bug 2037944: endpoints checker: check only the custom hostname if configured 2022-05-13 01:01:38 UTC
Red Hat Product Errata RHBA-2022:5509 0 None None None 2022-07-12 13:04:13 UTC

Comment 5 Xingxing Xia 2022-07-08 06:28:01 UTC
Save normal route/oauth-openshift json
$ oc get route oauth-openshift -n openshift-authentication -o json > route_oauth-openshift.json
Check the "ingress" array under the "status" field; it looks like this:
$ cat route_oauth-openshift.json
...
        "ingress": [
            {
                "conditions": [
                    {
                        "lastTransitionTime": "2022-07-08T01:39:54Z",
                        "status": "True",
                        "type": "Admitted"
                    }
                ],
                "host": "oauth-openshift.apps.<snipped>.openshift.com",
                "routerCanonicalHostname": "router-default.apps.<snipped>.openshift.com",
                "routerName": "default",
                "wildcardPolicy": "None"
            }
        ]
...

Per the analysis in https://bugzilla.redhat.com/show_bug.cgi?id=2030961#c13, a corrupted "status" on the oauth-openshift route caused the degraded condition. So, corrupt it by adding two more elements, copied from the degraded customer environment's must-gather, to the "ingress" array:
$ vi route_oauth-openshift.json
...
        "ingress": [
            {
              "conditions": [
                {
                  "lastTransitionTime": "2021-09-17T05:55:25Z",
                  "status": "True",
                  "type": "Admitted"
                }
              ],
              "host": "oauth-openshift.se1.ebaykorea.com",
              "routerCanonicalHostname": "se1.ebaykorea.com",
              "routerName": "default",
              "wildcardPolicy": "None"
            },
            {
                "conditions": [
                    {
                        "lastTransitionTime": "2022-07-08T01:39:54Z",
                        "status": "True",
                        "type": "Admitted"
                    }
                ],
                "host": "oauth-openshift.apps.<snipped>.openshift.com",
                "routerCanonicalHostname": "router-default.apps.<snipped>.openshift.com",
                "routerName": "default",
                "wildcardPolicy": "None"
            },
            {
              "conditions": [
                {
                  "lastTransitionTime": "2021-10-05T00:53:55Z",
                  "status": "True",
                  "type": "Admitted"
                }
              ],
              "host": "oauth-openshift.se1.ebaykorea.com",
              "routerCanonicalHostname": "public.ebaykorea.com",
              "routerName": "public",
              "wildcardPolicy": "None"
            }
        ]
...
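Instead of editing the file in vi, the extra ingress entries could also be appended programmatically. A minimal sketch using Python's stdlib json module; the helper name and the in-memory stand-in for the saved route file are illustrative, not from the must-gather:

```python
import json

# Hypothetical helper: append extra entries under status.ingress,
# mimicking the manual vi edit above.
def add_ingress_entries(route, extra_entries):
    ingress = route.setdefault("status", {}).setdefault("ingress", [])
    ingress.extend(extra_entries)
    return route

# Illustrative stand-in for the saved route_oauth-openshift.json content.
route = {"status": {"ingress": [
    {"host": "oauth-openshift.apps.example.openshift.com",
     "routerName": "default", "wildcardPolicy": "None"},
]}}

# The two entries copied from the degraded environment (conditions omitted).
extra = [
    {"host": "oauth-openshift.se1.ebaykorea.com",
     "routerCanonicalHostname": "se1.ebaykorea.com",
     "routerName": "default", "wildcardPolicy": "None"},
    {"host": "oauth-openshift.se1.ebaykorea.com",
     "routerCanonicalHostname": "public.ebaykorea.com",
     "routerName": "public", "wildcardPolicy": "None"},
]

route = add_ingress_entries(route, extra)
print(json.dumps(route["status"]["ingress"], indent=2))
```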

Then send a request to update route/oauth-openshift with the corrupted "status" content:
$ oc login -u kubeadmin
$ KUBEADMIN_TOKEN=$(oc whoami -t)
$ curl -k -X PUT -d @route_oauth-openshift.json -H 'Content-Type: application/json' -H "Authorization: Bearer $KUBEADMIN_TOKEN" "$(oc whoami --show-server)/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift/status"
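The PUT targets the route's /status subresource, since a regular update to the route object would not replace its status. A stdlib-only Python sketch that builds the same request; the API server URL and token are placeholders for the values `oc whoami` returns above:

```python
import json
import urllib.request

# Placeholders: in the test above these come from
# `oc whoami --show-server` and `oc whoami -t`.
api_server = "https://api.example.openshift.com:6443"
token = "<kubeadmin-token>"

status_url = (api_server +
              "/apis/route.openshift.io/v1/namespaces/openshift-authentication"
              "/routes/oauth-openshift/status")

# Stand-in payload; the real body is the full edited route JSON.
body = json.dumps({"status": {"ingress": []}}).encode()

req = urllib.request.Request(
    status_url,
    data=body,
    method="PUT",  # replace the status subresource
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + token},
)
# urllib.request.urlopen(req) would send it; skipped in this sketch
# since there is no live cluster.
```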

Check what the "status" looks like after the corruption:
$ oc get route oauth-openshift -n openshift-authentication -o yaml | grep -A 100 "^status"
status:
  ingress:
  - conditions:
    - lastTransitionTime: "2022-07-08T02:56:49Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.apps.<snipped>.openshift.com
    routerCanonicalHostname: router-default.apps.<snipped>.openshift.com
    routerName: default
    wildcardPolicy: None
  - conditions:
    - lastTransitionTime: "2022-07-08T01:39:54Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.apps.<snipped>.openshift.com
    routerCanonicalHostname: router-default.apps.<snipped>.openshift.com
    routerName: default
    wildcardPolicy: None
  - conditions:
    - lastTransitionTime: "2021-10-05T00:53:55Z"
      status: "True"
      type: Admitted
    host: oauth-openshift.se1.ebaykorea.com
    routerCanonicalHostname: public.ebaykorea.com
    routerName: public
    wildcardPolicy: None

Observe for some time; the authentication operator does not become degraded:
$ oc get co authentication
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.9.42    True        False         False      4h10m
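The Degraded column above comes from the ClusterOperator's status conditions. A small sketch that reads the same condition from `oc get co authentication -o json` output; the helper name and the sample dict (mirroring the table above) are illustrative:

```python
# Hypothetical helper: return the status of a named condition from
# parsed `oc get co authentication -o json` output.
def condition_status(co, cond_type):
    for cond in co.get("status", {}).get("conditions", []):
        if cond["type"] == cond_type:
            return cond["status"]
    return None

# Illustrative sample mirroring the table above.
co = {"status": {"conditions": [
    {"type": "Available", "status": "True"},
    {"type": "Progressing", "status": "False"},
    {"type": "Degraded", "status": "False"},
]}}

print(condition_status(co, "Degraded"))  # prints "False"
```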

So the issue is fixed in 4.9.42. Moving to VERIFIED.

Comment 7 errata-xmlrpc 2022-07-12 13:04:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.42 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5509

