Bug 1907202
| Summary: | configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Andreas Karis <akaris> |
| Component: | Image Registry | Assignee: | Ricardo Maraschini <rmarasch> |
| Status: | CLOSED ERRATA | QA Contact: | Wenjing Zheng <wzheng> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.5 | CC: | aos-bugs, obulatov, rmarasch, xiuwang |
| Target Milestone: | --- | Keywords: | UpcomingSprint |
| Target Release: | 4.7.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-02-24 15:43:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1916857 | | |

Doc Text:

Cause: The config status was not updated during the operator's sync execution.

Consequence: The config's status field did not present the most up-to-date (applied) Swift configuration.

Fix: The sync process now updates the config's status to the config's spec values.

Result: Spec and status are in sync, with status presenting the currently applied configuration.
Description
Andreas Karis
2020-12-13 15:45:11 UTC
The only way I found to update the status field is to trigger creation of a new container by deleting spec.storage.swift.container. If I delete that, a new container will be created in Swift, but with possibly other undesired side effects. This is from a different deployment, but I followed the same procedure. The CR configs.imageregistry.operator.openshift.io cluster will only update its state if we delete the Swift container spec in it (https://github.com/openshift/cluster-image-registry-operator/blob/d96a9e639acc07079a7eec73188ba07bfb3c6c8a/pkg/storage/swift/swift.go#L386):

~~~
(overcloud) [stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-09T15:13:47Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 7
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:managementState: {}
          f:storage:
            f:swift:
              f:authURL: {}
        f:status:
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-11T10:21:16Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-12T16:39:31Z"
    name: cluster
    resourceVersion: "1394069"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: da79fc1d-4d23-43df-aabb-105053de7842
  spec:
    httpSecret: a424c36f8edc9573c90d36cc4f87555f4aa018662c1ffaea6cc67a7d8b8cf6872c34fd863cc7c65d50660e6fc18d303a15dc6e5a295ef2fe319339f978f5987a
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3 <---------------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
  status:
    conditions:
    - lastTransitionTime: "2020-12-12T16:39:31Z"
      message: User supplied container already exists
      reason: Container exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-11T09:45:11Z"
      message: The registry is ready
      reason: Ready
      status: "True"
      type: Available
    - lastTransitionTime: "2020-12-12T15:13:34Z"
      message: The registry is ready
      reason: Ready
      status: "False"
      type: Progressing
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-09T15:20:12Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 5
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 7
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.119:5000//v3 <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
(overcloud) [stack@undercloud-0 ~]$ oc get pods -A | grep cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx
(overcloud) [stack@undercloud-0 ~]$ oc get pods -A | grep cluster-xbk9m-image-registry
(overcloud) [stack@undercloud-0 ~]$ oc edit configs.imageregistry.operator.openshift.io cluster
config.imageregistry.operator.openshift.io/cluster edited
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$
(overcloud) [stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-09T15:13:47Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 9
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:managementState: {}
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-11T10:21:16Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-12T16:41:23Z"
    name: cluster
    resourceVersion: "1394686"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: da79fc1d-4d23-43df-aabb-105053de7842
  spec:
    httpSecret: a424c36f8edc9573c90d36cc4f87555f4aa018662c1ffaea6cc67a7d8b8cf6872c34fd863cc7c65d50660e6fc18d303a15dc6e5a295ef2fe319339f978f5987a
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3 <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
  status:
    conditions:
    - lastTransitionTime: "2020-12-12T16:41:20Z"
      reason: Swift container Exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-11T09:45:11Z"
      message: The registry has minimum availability
      reason: MinimumAvailability
      status: "True"
      type: Available
    - lastTransitionTime: "2020-12-12T16:41:20Z"
      message: The deployment has not completed
      reason: DeploymentNotCompleted
      status: "True"
      type: Progressing
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-09T15:20:12Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 6
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 9
    readyReplicas: 0
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3 <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~

~~~
(overcloud) [stack@undercloud-0 ~]$ swift list
cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx
cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep
~~~

But what are the consequences of this?

Background: This is a valid use case. A customer deployed an OpenStack cloud with HTTP endpoints, and OpenShift authenticates against those HTTP endpoints. The customer then wants to change the OSP endpoints to SSL/TLS, from http://<url>:5000 to https://<url>:13000 for keystone. The cluster-image-registry-operator should correctly react to this and update the CRD's status.

At the moment, it seems that we only update the status when we create a storage container:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L438

Or when we remove storage:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L503

I am not sure if this is the relevant code here, but when an update is detected, we do not update status.storage.swift:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L361

- Andreas

Verified on 4.7.0-0.nightly-2021-01-19-095812: valid changes to spec.storage.swift are reflected in status.storage.swift; an invalid change to spec.storage.swift is not reflected.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633
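For reference, the spec/status comparison described above can be checked without a full `oc get -o yaml` dump. The following is a minimal sketch, not taken from the original report: it assumes the same config resource and field paths shown in the output above, and the authURL value is only an illustrative placeholder.

~~~
# Change the Swift auth URL in the spec (merge patch; the URL here is a placeholder).
oc patch configs.imageregistry.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"storage":{"swift":{"authURL":"https://172.16.0.119:13000/v3"}}}}'

# Print the spec value and the value the operator reports as applied in status.
# With the fix, both lines match after the next operator sync; before the fix,
# the status line kept the old URL.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.storage.swift.authURL}{"\n"}{.status.storage.swift.authURL}{"\n"}'
~~~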