Description of problem:
On a direct install of OCP 4.7 with an external cluster-wide proxy enabled, the authentication cluster operator becomes degraded.

Co status:
authentication   4.7.0-0.nightly-ppc64le-2021-01-23-071429   True   False   True   41h

```
Message: ProxyConfigControllerDegraded: failed to reach endpoint("https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz") missing in NO_PROXY(".cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost") with error: Get "https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz": Service Unavailable
Reason: ProxyConfigController_SyncError
```

Builds tried:
1. 4.7.0-0.nightly-ppc64le-2021-01-23-071429
2. 4.7.0-0.nightly-ppc64le-2021-01-24-004926

How reproducible:
Direct install of OCP 4.7 with the external cluster proxy enabled.

Actual results:
Co status:
authentication   4.7.0-0.nightly-ppc64le-2021-01-23-071429   True   False   True   41h

Additional info:

## oc version
```
[root@satwin-proxy-bastion ~]# oc version
Client Version: 4.7.0-0.nightly-ppc64le-2021-01-23-071429
Server Version: 4.7.0-0.nightly-ppc64le-2021-01-23-071429
Kubernetes Version: v1.20.0+70dd98e
[root@satwin-proxy-bastion ~]#
```

## oc get nodes
```
[root@satwin-proxy-bastion ~]# oc get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-0   Ready    master   2d1h   v1.20.0+f0a2ec9
master-1   Ready    master   2d1h   v1.20.0+f0a2ec9
master-2   Ready    master   2d1h   v1.20.0+f0a2ec9
worker-0   Ready    worker   2d     v1.20.0+f0a2ec9
worker-1   Ready    worker   2d     v1.20.0+f0a2ec9
```

## Pods in the authentication-related namespaces
```
[root@satwin-proxy-bastion ~]# oc get pods -n openshift-authentication-operator
NAME                                       READY   STATUS    RESTARTS   AGE
authentication-operator-76575f84c6-wtzwj   1/1     Running   1          2d2h

[root@satwin-proxy-bastion ~]# oc get pods -n openshift-authentication
NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-868dd9597d-4449m   1/1     Running   0          2d
oauth-openshift-868dd9597d-r8d7g   1/1     Running   0          2d

[root@satwin-proxy-bastion ~]# oc get pods -n openshift-oauth-apiserver
NAME                        READY   STATUS    RESTARTS   AGE
apiserver-cb7c4f6f8-g4kqq   1/1     Running   0          2d1h
apiserver-cb7c4f6f8-g89zf   1/1     Running   0          2d1h
apiserver-cb7c4f6f8-l7bzf   1/1     Running   0          2d1h

[root@satwin-proxy-bastion ~]# oc get pods -n openshift-ingress
NAME                              READY   STATUS    RESTARTS   AGE
router-default-6f4bf65545-cfxqv   1/1     Running   0          2d1h
router-default-6f4bf65545-r95xq   1/1     Running   0          2d1h

[root@satwin-proxy-bastion ~]# oc get pods -n openshift-config
No resources found in openshift-config namespace.
[root@satwin-proxy-bastion ~]#
```

## logs
```
[root@satwin-proxy-bastion ~]# oc logs authentication-operator-76575f84c6-wtzwj -n openshift-authentication-operator
1 request.go:655] Throttling request took 2.59548059s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/secrets?labelSelector=encryption.apiserver.operator.openshift.io%2Fcomponent%3Dopenshift-oauth-apiserver
I0127 11:36:01.168996       1 request.go:655] Throttling request took 2.392965203s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/pods?labelSelector=apiserver%3Dtrue
I0127 11:36:02.169028       1 request.go:655] Throttling request took 2.395404188s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift
I0127 11:36:03.169026       1 request.go:655] Throttling request took 2.394990457s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/secrets/v4-0-config-system-session
I0127 11:36:04.169037       1 request.go:655] Throttling request took 2.394632369s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa
I0127 11:36:05.369028       1 request.go:655] Throttling request took 2.195701869s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/secrets/v4-0-config-system-session
E0127 11:36:06.255930       1 base_controller.go:250] "ProxyConfigController" controller failed to sync "key", err: failed to reach endpoint("https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz") missing in NO_PROXY(".cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost") with error: Get "https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz": Service Unavailable
I0127 11:36:06.369016       1 request.go:655] Throttling request took 1.788154105s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-868dd9597d-r8d7g
I0127 11:36:07.569073       1 request.go:655] Throttling request took 1.791945847s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/pods?labelSelector=apiserver%3Dtrue
I0127 11:36:08.768986       1 request.go:655] Throttling request took 1.395314102s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver
I0127 11:36:09.769040       1 request.go:655] Throttling request took 1.189547833s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/pods?labelSelector=apiserver%3Dtrue
I0127 11:36:10.969014       1 request.go:655] Throttling request took 1.194124584s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/encryption-config-2
I0127 11:36:12.169009       1 request.go:655] Throttling request took 1.195965944s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/secrets?labelSelector=encryption.apiserve
```

## Authentication co description
```
[root@satwin-proxy-bastion origin]# oc describe co authentication
Name:         authentication
Namespace:
Labels:       <none>
Annotations:  exclude.release.openshift.io/internal-openshift-hosted: true
              include.release.openshift.io/self-managed-high-availability: true
              include.release.openshift.io/single-node-developer: true
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2021-01-25T08:49:13Z
  Generation:          1
  Managed Fields:
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:exclude.release.openshift.io/internal-openshift-hosted:
          f:include.release.openshift.io/self-managed-high-availability:
          f:include.release.openshift.io/single-node-developer:
      f:spec:
      f:status:
        .:
        f:extension:
    Manager:      cluster-version-operator
    Operation:    Update
    Time:         2021-01-25T08:49:13Z
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
        f:relatedObjects:
        f:versions:
    Manager:         authentication-operator
    Operation:       Update
    Time:            2021-01-25T09:46:11Z
  Resource Version:  47499
  Self Link:         /apis/config.openshift.io/v1/clusteroperators/authentication
  UID:               c050aae7-e92d-4676-b862-3d72e5d91c7e
Spec:
Status:
  Conditions:
    Last Transition Time:  2021-01-25T09:44:16Z
    Message:               ProxyConfigControllerDegraded: failed to reach endpoint("https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz") missing in NO_PROXY(".cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost") with error: Get "https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz": Service Unavailable
    Reason:                ProxyConfigController_SyncError
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2021-01-25T10:53:07Z
    Message:               All is well
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2021-01-25T11:04:46Z
    Message:               OAuthServerDeploymentAvailable: availableReplicas==2
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2021-01-25T09:42:14Z
    Message:               All is well
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:  <nil>
  Related Objects:
    Group:      operator.openshift.io
    Name:       cluster
    Resource:   authentications
    Group:      config.openshift.io
    Name:       cluster
    Resource:   authentications
    Group:      config.openshift.io
    Name:       cluster
    Resource:   infrastructures
    Group:      config.openshift.io
    Name:       cluster
    Resource:   oauths
    Group:      route.openshift.io
    Name:       oauth-openshift
    Namespace:  openshift-authentication
    Resource:   routes
    Group:
    Name:       oauth-openshift
    Namespace:  openshift-authentication
    Resource:   services
    Group:
    Name:       openshift-config
    Resource:   namespaces
    Group:
    Name:       openshift-config-managed
    Resource:   namespaces
    Group:
    Name:       openshift-authentication
    Resource:   namespaces
    Group:
    Name:       openshift-authentication-operator
    Resource:   namespaces
    Group:
    Name:       openshift-ingress
    Resource:   namespaces
    Group:
    Name:       openshift-oauth-apiserver
    Resource:   namespaces
  Versions:
    Name:     oauth-apiserver
    Version:  4.7.0-0.nightly-ppc64le-2021-01-23-071429
    Name:     oauth-openshift
    Version:  4.7.0-0.nightly-ppc64le-2021-01-23-071429_openshift
    Name:     operator
    Version:  4.7.0-0.nightly-ppc64le-2021-01-23-071429
Events:  <none>
[root@satwin-proxy-bastion origin]#
```
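For triage, the error message suggests the operator's ProxyConfigController is simply doing an HTTPS GET against the route's /healthz with the cluster proxy settings applied. Below is a minimal Go sketch (not the operator's actual code) to reproduce a comparable probe from the bastion, assuming standard environment-based proxy resolution (HTTPS_PROXY/NO_PROXY), which is what Go's net/http uses by default; the endpoint URL is taken from the error above, and the certificate-verification skip is a diagnostic shortcut only:

```
// healthcheck.go - reproduce a ProxyConfigController-style healthz probe.
// Run with the same proxy settings the operator reports, e.g.:
//   HTTPS_PROXY=http://9.114.99.234:3128 \
//   NO_PROXY=".cluster.local,.satwin-proxy.redhat.com,..." \
//   go run healthcheck.go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	endpoint := "https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz"

	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// ProxyFromEnvironment applies HTTPS_PROXY and skips hosts
			// matched by NO_PROXY - the default resolution for Go programs.
			Proxy: http.ProxyFromEnvironment,
			// The route may serve a cert the bastion does not trust;
			// skip verification for this diagnostic only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(endpoint)
	if err != nil {
		fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Printf("GET %s -> %s\n", endpoint, resp.Status)
}
```

If the probe succeeds with the proxy variables unset but returns 503 Service Unavailable with them set, the request is being routed through the proxy despite the NO_PROXY entry, which is consistent with what the operator reports.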
It's interesting that the error is:

ProxyConfigControllerDegraded: failed to reach endpoint("https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz") missing in NO_PROXY(".cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost") with error: Get "https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz": Service Unavailable

We can see that https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz *should* match `.satwin-proxy.redhat.com`, which is in the NO_PROXY config (a quick check of the matching semantics is sketched below). I note that the docs suggest including '*' for wildcards, so I'm wondering whether the wildcards are missing from the spec or just omitted from the output/error.

Can we see the output of `oc describe proxy/cluster`?
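For reference on the matching: Go's standard proxy resolution treats a leading-dot NO_PROXY entry as a domain-suffix match, so no '*' should be required. A small standalone sketch, assuming the operator follows the golang.org/x/net/http/httpproxy semantics (the proxy URL and NO_PROXY values are copied from the error message):

```
package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

func main() {
	// Proxy URL and NO_PROXY copied verbatim from the operator's error.
	cfg := &httpproxy.Config{
		HTTPSProxy: "http://9.114.99.234:3128",
		NoProxy:    ".cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost",
	}
	proxyFn := cfg.ProxyFunc()

	u, err := url.Parse("https://oauth-openshift.apps.satwin-proxy.redhat.com/healthz")
	if err != nil {
		panic(err)
	}
	proxyURL, err := proxyFn(u)
	if err != nil {
		panic(err)
	}
	// A nil proxy URL means the host matched NO_PROXY and the request
	// goes direct; ".satwin-proxy.redhat.com" matches any subdomain.
	if proxyURL == nil {
		fmt.Println("direct (host matched NO_PROXY)")
	} else {
		fmt.Println("via proxy:", proxyURL)
	}
}
```

If this prints "direct", the NO_PROXY contents themselves are fine, and the problem is more likely in how the health check builds its HTTP client than in the proxy configuration.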
Output of `oc describe proxy/cluster`:
```
# oc describe proxy/cluster
Name:         cluster
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         Proxy
Metadata:
  Creation Timestamp:  2021-01-25T08:48:56Z
  Generation:          1
  Managed Fields:
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:httpProxy:
        f:httpsProxy:
        f:noProxy:
        f:trustedCA:
          .:
          f:name:
      f:status:
        .:
        f:httpProxy:
        f:httpsProxy:
        f:noProxy:
    Manager:         cluster-bootstrap
    Operation:       Update
    Time:            2021-01-25T08:48:56Z
  Resource Version:  533
  Self Link:         /apis/config.openshift.io/v1/proxies/cluster
  UID:               cd9bcf47-b338-4ab0-9e00-601c907d033c
Spec:
  Http Proxy:   http://9.114.99.234:3128
  Https Proxy:  http://9.114.99.234:3128
  No Proxy:     .satwin-proxy.redhat.com,9.114.96.0/22
  Trusted CA:
    Name:
Status:
  Http Proxy:   http://9.114.99.234:3128
  Https Proxy:  http://9.114.99.234:3128
  No Proxy:     .cluster.local,.satwin-proxy.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.satwin-proxy.redhat.com,localhost
Events:  <none>
```
There was also this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1917114, which was fixed recently. Can you try with the latest nightly to see if the issue still occurs?
Yes, the issue is not seen with the latest build `4.7.0-0.nightly-ppc64le-2021-02-01-211244`.

```
# oc version
Client Version: 4.7.0-0.nightly-ppc64le-2021-02-01-211244
Server Version: 4.7.0-0.nightly-ppc64le-2021-02-01-211244
Kubernetes Version: v1.20.0+3b90e69
```

Authentication co status:
```
# oc get co
NAME             VERSION                                     AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.7.0-0.nightly-ppc64le-2021-02-01-211244   True        False         False      152m
```

Output of `oc describe proxy/cluster`:
```
# oc describe proxy/cluster
Name:         cluster
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         Proxy
Metadata:
  Creation Timestamp:  2021-02-02T11:58:20Z
  Generation:          1
  Managed Fields:
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:httpProxy:
        f:httpsProxy:
        f:noProxy:
        f:trustedCA:
          .:
          f:name:
      f:status:
        .:
        f:httpProxy:
        f:httpsProxy:
        f:noProxy:
    Manager:         cluster-bootstrap
    Operation:       Update
    Time:            2021-02-02T11:58:20Z
  Resource Version:  536
  Self Link:         /apis/config.openshift.io/v1/proxies/cluster
  UID:               c3315b61-86c1-4f2a-8fef-1454e1548b09
Spec:
  Http Proxy:   http://9.114.99.234:3128
  Https Proxy:  http://9.114.99.234:3128
  No Proxy:     .mtest-prviin47.redhat.com,9.114.96.0/22
  Trusted CA:
    Name:
Status:
  Http Proxy:   http://9.114.99.234:3128
  Https Proxy:  http://9.114.99.234:3128
  No Proxy:     .cluster.local,.mtest-prviin47.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,9.114.96.0/22,api-int.mtest-prviin47.redhat.com,localhost
Events:  <none>
```
Hi Satwinder, should we keep this issue open, or should we close it if the issue is not observed in the latest nightly?
*** This bug has been marked as a duplicate of bug 1917114 ***