Bug 1978774
| Summary: | Cluster-version operator loads proxy config from spec, not status | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | W. Trevor King <wking> |
| Component: | Cluster Version Operator | Assignee: | W. Trevor King <wking> |
| Status: | CLOSED ERRATA | QA Contact: | Johnny Liu <jialiu> |
| Severity: | low | Priority: | unspecified |
| Version: | 4.2.0 | Target Release: | 4.9.0 |
| CC: | aos-bugs, jokerman, palonsor, rsandu | Type: | Bug |
| Hardware: | Unspecified | OS: | Unspecified |
| Doc Type: | Bug Fix | Bug Blocks: | 1980411 |
| Last Closed: | 2021-10-18 17:38:01 UTC | | |

Doc Text:

Cause: The cluster-version operator loaded its proxy configuration from the Proxy resource's spec properties instead of from the status properties.

Consequence: The cluster-version operator consumed the raw, admin-configured values instead of only the values that the network operator had verified. When admins configured invalid values, the cluster-version operator was therefore unable to reach the upstream update service or signature stores until the invalid values were corrected.

Fix: The cluster-version operator transitioned to using the status properties.

Result: The cluster-version operator continues to use the verified status properties, regardless of the presence of invalid values in the Proxy spec properties.
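In code terms, the fix amounts to reading the Proxy resource's status stanza instead of its spec stanza. Below is a minimal Go sketch of that behavior, assuming the Proxy types from github.com/openshift/api/config/v1; it is an illustration, not the actual CVO change.

package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// proxyFromStatus returns the proxy endpoints the operator should honor.
// Reading status (not spec) means raw, possibly-invalid admin input is
// ignored until the network operator has validated and published it.
func proxyFromStatus(proxy *configv1.Proxy) (httpProxy, httpsProxy, noProxy string) {
	return proxy.Status.HTTPProxy, proxy.Status.HTTPSProxy, proxy.Status.NoProxy
}

func main() {
	// Hypothetical object mirroring the verification below: spec carries an
	// invalid value, while status still holds the last verified configuration.
	p := &configv1.Proxy{
		Spec: configv1.ProxySpec{
			HTTPSProxy: "proxy-user1", // invalid: not a URI
		},
		Status: configv1.ProxyStatus{
			HTTPSProxy: "http://proxy-user1:xxxx@10.0.0.2:3128",
		},
	}
	_, https, _ := proxyFromStatus(p)
	fmt.Println("would use HTTPS proxy:", https) // the verified status value
}

Because the cluster-network-operator only writes status after validating spec, anything read from status has already passed validation.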
Description
W. Trevor King
2021-07-02 17:15:11 UTC
Reported by Pablo in [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1978749#c1

Verification for this probably looks like:

1. Install a proxy cluster.
2. Stuff some invalid, non-URI content into the Proxy's spec.httpsProxy (an example patch command follows the first proxy dump below).
3. See that the CVO continues to use the previous proxy value, which worked, and does not pick up the broken value.

And then maybe again with a non-proxy install?

Reproduce this bug with 4.8.0-rc.2:
[root@preserve-jialiu-ansible ~]# oc get proxies.config.openshift.io cluster -o json
{
"apiVersion": "config.openshift.io/v1",
"kind": "Proxy",
"metadata": {
"creationTimestamp": "2021-07-05T07:48:18Z",
"generation": 2,
"name": "cluster",
"resourceVersion": "31728",
"uid": "1530d6ad-1021-4e91-892f-79f72207053a"
},
"spec": {
"httpProxy": "proxy-user1",
"httpsProxy": "proxy-user1",
"noProxy": "test.no-proxy.com",
"trustedCA": {
"name": ""
}
},
"status": {
"httpProxy": "http://proxy-user1:xxxx@10.0.0.2:3128",
"httpsProxy": "http://proxy-user1:xxxx@10.0.0.2:3128",
"noProxy": ".cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.jialiu197877a.qe.gcp.devcluster.openshift.com,localhost,metadata,metadata.google.internal,metadata.google.internal.,test.no-proxy.com"
}
}
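The invalid spec values shown above can be injected with something like the following (an illustrative command; the exact invocation used during verification is not recorded in this report):

oc patch proxy/cluster --type merge -p '{"spec":{"httpProxy":"proxy-user1","httpsProxy":"proxy-user1"}}'

Note that status still carries the previously verified endpoints: the network operator refuses to propagate the invalid spec values and instead goes Degraded, as the next dump shows.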
[root@preserve-jialiu-ansible ~]# oc describe co network
Name: network
Namespace:
Labels: <none>
Annotations: include.release.openshift.io/ibm-cloud-managed: true
include.release.openshift.io/self-managed-high-availability: true
include.release.openshift.io/single-node-developer: true
network.operator.openshift.io/last-seen-state: {"DaemonsetStates":[],"DeploymentStates":[]}
API Version: config.openshift.io/v1
Kind: ClusterOperator
Metadata:
Creation Timestamp: 2021-07-05T07:48:23Z
Generation: 1
Managed Fields:
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:include.release.openshift.io/ibm-cloud-managed:
f:include.release.openshift.io/self-managed-high-availability:
f:include.release.openshift.io/single-node-developer:
f:spec:
f:status:
.:
f:extension:
Manager: cluster-version-operator
Operation: Update
Time: 2021-07-05T07:48:23Z
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:network.operator.openshift.io/last-seen-state:
f:status:
f:conditions:
f:relatedObjects:
f:versions:
Manager: cluster-network-operator
Operation: Update
Time: 2021-07-05T07:54:24Z
Resource Version: 31731
UID: 5fd04c31-ac3b-4d31-820c-8729613aceac
Spec:
Status:
Conditions:
Last Transition Time: 2021-07-05T08:22:13Z
Message: The configuration is invalid for proxy 'cluster' (invalid httpProxy URI: parse "proxy-user1": invalid URI for request). Use 'oc edit proxy.config.openshift.io cluster' to fix.
Reason: InvalidProxyConfig
Status: True
Type: Degraded
Last Transition Time: 2021-07-05T07:53:44Z
Status: False
Type: ManagementStateDegraded
Last Transition Time: 2021-07-05T07:53:44Z
Status: True
Type: Upgradeable
Last Transition Time: 2021-07-05T08:06:17Z
Status: False
Type: Progressing
Last Transition Time: 2021-07-05T07:54:24Z
Status: True
Type: Available
Extension: <nil>
Related Objects:
<--snip-->
[root@preserve-jialiu-ansible ~]# oc adm upgrade
Cluster version is 4.8.0-rc.2
warning: Cannot display available updates:
Reason: RemoteFailed
Message: Unable to retrieve available updates: Get "https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.8&id=83e0b1a8-d7c0-4c5c-a775-1d6e2d853255&version=4.8.0-rc.2": proxyconnect tcp: dial tcp :0: connect: connection refused
Verified this bug with 4.9.0-0.nightly-2021-07-04-140102, and it passed:
[root@preserve-jialiu-ansible ~]# oc get proxies.config.openshift.io cluster -o json
{
"apiVersion": "config.openshift.io/v1",
"kind": "Proxy",
"metadata": {
"creationTimestamp": "2021-07-05T04:46:53Z",
"generation": 3,
"name": "cluster",
"resourceVersion": "81437",
"uid": "2f5d4f4d-6844-453d-9579-770b5a0be16c"
},
"spec": {
"httpProxy": "proxy-user1",
"httpsProxy": "proxy-user1",
"noProxy": "test.no-proxy.com",
"trustedCA": {
"name": ""
}
},
"status": {
"httpProxy": "http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@10.0.0.2:3128",
"httpsProxy": "http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@10.0.0.2:3128",
"noProxy": ".cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.jialiu1978774.qe.gcp.devcluster.openshift.com,localhost,metadata,metadata.google.internal,metadata.google.internal.,test.no-proxy.com"
}
}
[root@preserve-jialiu-ansible ~]# oc describe co network
Name: network
Namespace:
Labels: <none>
Annotations: include.release.openshift.io/ibm-cloud-managed: true
include.release.openshift.io/self-managed-high-availability: true
include.release.openshift.io/single-node-developer: true
network.operator.openshift.io/last-seen-state: {"DaemonsetStates":[],"DeploymentStates":[]}
API Version: config.openshift.io/v1
Kind: ClusterOperator
Metadata:
Creation Timestamp: 2021-07-05T04:46:57Z
Generation: 1
Managed Fields:
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:include.release.openshift.io/ibm-cloud-managed:
f:include.release.openshift.io/self-managed-high-availability:
f:include.release.openshift.io/single-node-developer:
f:ownerReferences:
.:
k:{"uid":"45654938-949c-49d3-b57e-e54c82910ab0"}:
.:
f:apiVersion:
f:kind:
f:name:
f:uid:
f:spec:
f:status:
.:
f:extension:
Manager: cluster-version-operator
Operation: Update
Time: 2021-07-05T04:46:58Z
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:network.operator.openshift.io/last-seen-state:
f:status:
f:conditions:
f:relatedObjects:
f:versions:
Manager: cluster-network-operator
Operation: Update
Time: 2021-07-05T04:49:02Z
Owner References:
API Version: config.openshift.io/v1
Kind: ClusterVersion
Name: version
UID: 45654938-949c-49d3-b57e-e54c82910ab0
Resource Version: 81439
UID: b9e4a440-a367-49f6-bf58-b07dea831009
Spec:
Status:
Conditions:
Last Transition Time: 2021-07-05T07:37:23Z
Message: The configuration is invalid for proxy 'cluster' (invalid httpProxy URI: parse "proxy-user1": invalid URI for request). Use 'oc edit proxy.config.openshift.io cluster' to fix.
Reason: InvalidProxyConfig
Status: True
Type: Degraded
Last Transition Time: 2021-07-05T04:48:01Z
Status: False
Type: ManagementStateDegraded
Last Transition Time: 2021-07-05T04:48:01Z
Status: True
Type: Upgradeable
Last Transition Time: 2021-07-05T04:58:52Z
Status: False
Type: Progressing
Last Transition Time: 2021-07-05T04:49:02Z
Status: True
Type: Available
Extension: <nil>
<--snip-->
[root@preserve-jialiu-ansible ~]# oc adm upgrade
Error while reconciling 4.9.0-0.nightly-2021-07-04-140102: the cluster operator network is degraded
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.9.0-0.nightly-2021-07-04-140102 not found in the "stable-4.8" channel
From the warning message above, the proxyconnect error is gone: the CVO no longer picks up the invalid spec values.

Repeat the check on a regular (non-proxy) install; everything works as expected there too.
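For the non-proxy case, the invalid values can be injected the same way (again an illustrative command, not recorded in the report):

oc patch proxy/cluster --type merge -p '{"spec":{"httpProxy":"test","httpsProxy":"test"}}'

With no working proxy for the network operator to verify, status stays empty, and the fixed CVO simply runs without a proxy: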
[root@preserve-jialiu-ansible ~]# oc get proxies.config.openshift.io cluster -o json
{
"apiVersion": "config.openshift.io/v1",
"kind": "Proxy",
"metadata": {
"creationTimestamp": "2021-07-05T08:39:40Z",
"generation": 2,
"name": "cluster",
"resourceVersion": "30752",
"uid": "87b6c43b-579b-4d79-a416-806a073055d0"
},
"spec": {
"httpProxy": "test",
"httpsProxy": "test",
"trustedCA": {
"name": ""
}
},
"status": {}
}
[root@preserve-jialiu-ansible ~]# oc adm upgrade
Cluster version is 4.8.0-rc.2
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.8.0-rc.2 not found in the "stable-4.8" channel
[root@preserve-jialiu-ansible ~]# oc get co network
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
network 4.8.0-rc.2 True False True 28m
[root@preserve-jialiu-ansible ~]# oc describe co network
Name: network
Namespace:
Labels: <none>
Annotations: include.release.openshift.io/ibm-cloud-managed: true
include.release.openshift.io/self-managed-high-availability: true
include.release.openshift.io/single-node-developer: true
network.operator.openshift.io/last-seen-state: {"DaemonsetStates":[],"DeploymentStates":[]}
API Version: config.openshift.io/v1
Kind: ClusterOperator
Metadata:
Creation Timestamp: 2021-07-05T08:39:45Z
Generation: 1
Managed Fields:
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:include.release.openshift.io/ibm-cloud-managed:
f:include.release.openshift.io/self-managed-high-availability:
f:include.release.openshift.io/single-node-developer:
f:spec:
f:status:
.:
f:extension:
Manager: cluster-version-operator
Operation: Update
Time: 2021-07-05T08:39:45Z
API Version: config.openshift.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:network.operator.openshift.io/last-seen-state:
f:status:
f:conditions:
f:relatedObjects:
f:versions:
Manager: cluster-network-operator
Operation: Update
Time: 2021-07-05T08:45:54Z
Resource Version: 30754
UID: 9757be6f-922b-4905-a797-47bd396d1198
Spec:
Status:
Conditions:
Last Transition Time: 2021-07-05T09:14:10Z
Message: The configuration is invalid for proxy 'cluster' (invalid httpProxy URI: parse "test": invalid URI for request). Use 'oc edit proxy.config.openshift.io cluster' to fix.
Reason: InvalidProxyConfig
Status: True
Type: Degraded
Last Transition Time: 2021-07-05T08:45:13Z
Status: False
Type: ManagementStateDegraded
Last Transition Time: 2021-07-05T08:45:13Z
Status: True
Type: Upgradeable
Last Transition Time: 2021-07-05T08:54:04Z
Status: False
Type: Progressing
Last Transition Time: 2021-07-05T08:45:54Z
Status: True
Type: Available
Extension: <nil>
Related Objects:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759