Bug 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Ricardo Maraschini
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Depends On:
Blocks: 1916857
 
Reported: 2020-12-13 15:45 UTC by Andreas Karis
Modified: 2021-02-24 15:43 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The operator did not update the config status during its sync execution. Consequence: The config's status field did not present the most up-to-date (applied) Swift configuration. Fix: The sync process now updates the config's status to match the spec values. Result: Spec and status are now in sync, with status reflecting the currently applied configuration.
Clone Of:
Environment:
Last Closed: 2021-02-24 15:43:00 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-image-registry-operator pull 653 0 None closed Bug 1907202: Sync status to spec with regards to Swift config 2021-02-14 23:17:04 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:43:26 UTC

Description Andreas Karis 2020-12-13 15:45:11 UTC
When the Swift storage URL is changed (e.g., DNS name, protocol, or port), the configs.imageregistry.operator.openshift.io cluster CR *does* react to the change and updates the associated pods, but it *does not* update the authURL field in its own status.
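The mismatch is easy to verify mechanically. A minimal illustrative sketch (not part of the operator; the values are copied from the final oc output in this report) that flags spec/status drift for the Swift storage fields:

```python
# Spec vs. status Swift config as reported after the last edit:
# spec carries the newly applied URL, status still shows the old one.
spec = {"swift": {"authURL": "https://172.16.0.145:13000/v3"}}
status = {"swift": {"authURL": "http://172.16.0.145:5000//v3"}}

def swift_drift(spec, status):
    """Return the Swift fields whose applied (spec) value differs from status."""
    drift = {}
    for key, want in spec["swift"].items():
        got = status["swift"].get(key)
        if got != want:
            drift[key] = (want, got)
    return drift

print(swift_drift(spec, status))
# authURL drifts: status was never synced back to the applied spec value.
```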

~~~
[stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-13T08:55:48Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 2
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:managementState: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-13T15:26:28Z"
    name: cluster
    resourceVersion: "152967"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: 1befdb57-6053-4030-9989-4624156b3f89
  spec:
    httpSecret: 30153df236995eb1d7b4f029c468b9e341e4ce0b87e509964979be5f5b82cb54450c9886ddb3cdc0541d3b035f51f26209b1e7a83684c2dfb03403eea9f798b1
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: http://172.16.0.145:5000/v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
  status:
    conditions:
    - lastTransitionTime: "2020-12-13T15:26:29Z"
      message: 'Failed to authenticate provider client: Post http://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      reason: 'Failed to authenticate provider client: Post http://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      status: "False"
      type: StorageExists
    - lastTransitionTime: "2020-12-13T14:53:47Z"
      message: The deployment does not have available replicas
      reason: NoReplicasAvailable
      status: "False"
      type: Available
    - lastTransitionTime: "2020-12-13T13:50:58Z"
      message: 'Unable to apply resources: unable to sync storage configuration: Failed
        to authenticate provider client: Post http://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      reason: Error
      status: "True"
      type: Progressing
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:57Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:58Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 2
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 2
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.145:5000//v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~

~~~
[stack@undercloud-0 ~]$ oc edit configs.imageregistry.operator.openshift.io  cluster
config.imageregistry.operator.openshift.io/cluster edited
~~~

~~~
[stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-13T08:55:48Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 3
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:managementState: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-13T15:28:05Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-13T15:28:05Z"
    name: cluster
    resourceVersion: "153456"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: 1befdb57-6053-4030-9989-4624156b3f89
  spec:
    httpSecret: 30153df236995eb1d7b4f029c468b9e341e4ce0b87e509964979be5f5b82cb54450c9886ddb3cdc0541d3b035f51f26209b1e7a83684c2dfb03403eea9f798b1
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.145:5000/v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
  status:
    conditions:
    - lastTransitionTime: "2020-12-13T15:28:06Z"
      message: 'Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      reason: 'Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      status: "False"
      type: StorageExists
    - lastTransitionTime: "2020-12-13T14:53:47Z"
      message: The deployment does not have available replicas
      reason: NoReplicasAvailable
      status: "False"
      type: Available
    - lastTransitionTime: "2020-12-13T13:50:58Z"
      message: 'Unable to apply resources: unable to sync storage configuration: Failed
        to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens:
        dial tcp 172.16.0.145:5000: connect: connection refused'
      reason: Error
      status: "True"
      type: Progressing
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:57Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:58Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 2
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 3
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.145:5000//v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~

~~~
[stack@undercloud-0 ~]$ oc edit configs.imageregistry.operator.openshift.io  cluster
config.imageregistry.operator.openshift.io/cluster edited
~~~

~~~
[stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-13T08:55:48Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 4
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-13T15:28:36Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:managementState: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-13T15:28:39Z"
    name: cluster
    resourceVersion: "153621"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: 1befdb57-6053-4030-9989-4624156b3f89
  spec:
    httpSecret: 30153df236995eb1d7b4f029c468b9e341e4ce0b87e509964979be5f5b82cb54450c9886ddb3cdc0541d3b035f51f26209b1e7a83684c2dfb03403eea9f798b1
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.145:13000/v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
  status:
    conditions:
    - lastTransitionTime: "2020-12-13T15:28:40Z"
      message: User supplied container already exists
      reason: Container exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-13T14:53:47Z"
      message: The deployment does not have available replicas
      reason: NoReplicasAvailable
      status: "False"
      type: Available
    - lastTransitionTime: "2020-12-13T13:50:58Z"
      message: The deployment has not completed
      reason: DeploymentNotCompleted
      status: "True"
      type: Progressing
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:57Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:58Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 3
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 4
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.145:5000//v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-13T08:55:48Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 4
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-13T15:28:36Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:managementState: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-13T15:31:51Z"
    name: cluster
    resourceVersion: "154711"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: 1befdb57-6053-4030-9989-4624156b3f89
  spec:
    httpSecret: 30153df236995eb1d7b4f029c468b9e341e4ce0b87e509964979be5f5b82cb54450c9886ddb3cdc0541d3b035f51f26209b1e7a83684c2dfb03403eea9f798b1
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.145:13000/v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
  status:
    conditions:
    - lastTransitionTime: "2020-12-13T15:31:52Z"
      message: User supplied container already exists
      reason: Container exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-13T15:28:54Z"
      message: The registry is ready
      reason: Ready
      status: "True"
      type: Available
    - lastTransitionTime: "2020-12-13T15:29:05Z"
      message: The registry is ready
      reason: Ready
      status: "False"
      type: Progressing
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-13T08:55:50Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:57Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-13T08:56:58Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 3
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 4
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.145:5000//v3
        authVersion: "3"
        container: cluster-c5pvb-image-registry-kkqvicsbhtwppjssdqctofkqtemrrsffy
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a19242ac5e3d4687838a00501c26e544
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~

~~~
I1213 15:28:40.239252      13 recorder_logging.go:37] &Event{ObjectMeta:{dummy.1650504410ad29f4  dummy    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/image-registry -n openshift-image-registry because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2020-12-13 15:28:40.239049204 +0000 UTC m=+541.735546929,LastTimestamp:2020-12-13 15:28:40.239049204 +0000 UTC m=+541.735546929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
I1213 15:28:40.244702      13 generator.go:60] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:59034deee9c1c1b51b4cf81012193ae2d661a693f570d4e510beb465774c3321" -> "sha256:2e16929004878785f62c52b78da4d9ef06fd7f8e36b436a2c51572f3a875a138"}, changed:metadata.annotations.operator.openshift.io/spec-hash={"ca67e72705f3672329f559df425174e099c2c8ff6c4a0fd822567dcad170df29" -> "4c8c51059e60095b7aeae46d05a99ac069ab299ee419d1cfcffceec3abd9a340"}, changed:metadata.generation={"2.000000" -> "3.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, changed:metadata.managedFields.0.time={"2020-12-13T08:56:59Z" -> "2020-12-13T15:26:28Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, changed:metadata.managedFields.1.time={"2020-12-13T15:26:28Z" -> "2020-12-13T15:28:39Z"}, changed:metadata.resourceVersion={"152966" -> "153620"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:b11818221942d8ac70f233d376671132798230f6a04474b8870c2aee3254fa59" -> "sha256:4d6d7a131097a34087e980676ccbfd6f11bcdc70396c71a7e3520e8a3746ce4b"}, changed:spec.template.spec.containers.0.env.2.value={"http://172.16.0.145:5000/v3" -> "https://172.16.0.145:13000/v3"}
I1213 15:28:40.245964      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:06Z" -> "2020-12-13T15:28:40Z"}, changed:status.conditions.0.message={"Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "User supplied container already exists"}, changed:status.conditions.0.reason={"Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "Container exists"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.2.message={"Unable to apply resources: unable to sync storage configuration: Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "The deployment has not completed"}, changed:status.conditions.2.reason={"Error" -> "DeploymentNotCompleted"}, changed:status.generations.0.lastGeneration={"2.000000" -> "3.000000"}, changed:status.observedGeneration={"3.000000" -> "4.000000"}
I1213 15:28:40.247436      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:40.255015      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:40.257498      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:40.265907      13 generator.go:60] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.1.time={"2020-12-13T15:28:05Z" -> "2020-12-13T15:28:39Z"}, changed:metadata.resourceVersion={"153457" -> "153623"}, changed:metadata.selfLink={"/apis/config.openshift.io/v1/clusteroperators/image-registry" -> "/apis/config.openshift.io/v1/clusteroperators/image-registry/status"}, changed:status.conditions.1.message={"Progressing: Unable to apply resources: unable to sync storage configuration: Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "Progressing: The deployment has not completed"}, changed:status.conditions.1.reason={"Error" -> "DeploymentNotCompleted"}
I1213 15:28:40.266028      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:40.266200      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:40.274014      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:40.290435      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:40.320831      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:42.105763      13 recorder_logging.go:37] &Event{ObjectMeta:{dummy.165050447fedcaad  dummy    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentUpdated,Message:Updated Deployment.apps/image-registry -n openshift-image-registry because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2020-12-13 15:28:42.105555629 +0000 UTC m=+543.602053395,LastTimestamp:2020-12-13 15:28:42.105555629 +0000 UTC m=+543.602053395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
I1213 15:28:42.108617      13 generator.go:60] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: 
I1213 15:28:42.109591      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:06Z" -> "2020-12-13T15:28:42Z"}, changed:status.conditions.0.message={"Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "User supplied container already exists"}, changed:status.conditions.0.reason={"Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "Container exists"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.2.message={"Unable to apply resources: unable to sync storage configuration: Failed to authenticate provider client: Post https://172.16.0.145:5000/v3/auth/tokens: dial tcp 172.16.0.145:5000: connect: connection refused" -> "The deployment has not completed"}, changed:status.conditions.2.reason={"Error" -> "DeploymentNotCompleted"}, changed:status.generations.0.lastGeneration={"2.000000" -> "3.000000"}, changed:status.observedGeneration={"3.000000" -> "4.000000"}
E1213 15:28:42.116311      13 controller.go:330] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing
I1213 15:28:44.004384      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:40Z" -> "2020-12-13T15:28:43Z"}
I1213 15:28:44.016445      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:44.018469      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:44.018756      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:46.002222      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:40Z" -> "2020-12-13T15:28:45Z"}
E1213 15:28:46.009202      13 controller.go:330] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing
I1213 15:28:48.020976      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:43Z" -> "2020-12-13T15:28:48Z"}
I1213 15:28:48.031420      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:48.032032      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:48.032291      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:50.352601      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:50.377346      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:50.386404      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:50.416709      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:51.349328      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:43Z" -> "2020-12-13T15:28:51Z"}, changed:status.conditions.1.lastTransitionTime={"2020-12-13T14:53:47Z" -> "2020-12-13T15:28:51Z"}, changed:status.conditions.1.message={"The deployment does not have available replicas" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.1.status={"False" -> "True"}
E1213 15:28:51.356132      13 controller.go:330] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing
I1213 15:28:54.444322      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:48Z" -> "2020-12-13T15:28:54Z"}, changed:status.conditions.1.lastTransitionTime={"2020-12-13T14:53:47Z" -> "2020-12-13T15:28:54Z"}, changed:status.conditions.1.message={"The deployment does not have available replicas" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.1.status={"False" -> "True"}
I1213 15:28:54.460590      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:54.461200      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:54.467533      13 generator.go:60] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.1.time={"2020-12-13T15:28:39Z" -> "2020-12-13T15:28:53Z"}, changed:metadata.resourceVersion={"153623" -> "153743"}, changed:metadata.selfLink={"/apis/config.openshift.io/v1/clusteroperators/image-registry" -> "/apis/config.openshift.io/v1/clusteroperators/image-registry/status"}, changed:status.conditions.0.lastTransitionTime={"2020-12-13T14:53:47Z" -> "2020-12-13T15:28:54Z"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.0.status={"False" -> "True"}
I1213 15:28:54.467600      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:54.467734      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:56.404234      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:54Z" -> "2020-12-13T15:28:56Z"}
I1213 15:28:56.413391      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:56.414799      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:56.415053      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:57.895031      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:56Z" -> "2020-12-13T15:28:57Z"}
I1213 15:28:57.904190      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:57.904595      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:28:57.904635      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:59.824311      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:57Z" -> "2020-12-13T15:28:59Z"}
I1213 15:28:59.834096      13 controller.go:333] event from workqueue successfully processed
I1213 15:28:59.834275      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:28:59.834416      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:01.704098      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:28:59Z" -> "2020-12-13T15:29:01Z"}
I1213 15:29:01.717273      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:29:01.717480      13 controller.go:333] event from workqueue successfully processed
I1213 15:29:01.720657      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:03.757318      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:29:01Z" -> "2020-12-13T15:29:03Z"}
I1213 15:29:03.767238      13 controller.go:333] event from workqueue successfully processed
I1213 15:29:03.768012      13 controllerimagepruner.go:316] event from image pruner workqueue successfully processed
I1213 15:29:03.768776      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:05.318755      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:05.332901      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:05.349543      13 clusteroperator.go:99] event from workqueue successfully processed
I1213 15:29:05.957726      13 controller.go:291] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2020-12-13T15:29:03Z" -> "2020-12-13T15:29:05Z"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.2.lastTransitionTime={"2020-12-13T13:50:58Z" -> "2020-12-13T15:29:05Z"}, changed:status.conditions.2.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.2.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.2.status={"True" -> "False"}
~~~

Comment 1 Andreas Karis 2020-12-13 15:49:10 UTC
The only way I found to update the status field is to trigger creation of a new container by deleting spec.storage.swift.container. Deleting that field makes the operator create a new container in Swift, but possibly with other undesired side effects.
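For reference, the same deletion can be done non-interactively with a JSON patch instead of `oc edit` (illustrative; the field path matches the CR shown below):

```shell
# Remove the container name from the spec; the operator then generates a new
# container in Swift (with the possible side effects noted above).
oc patch configs.imageregistry.operator.openshift.io/cluster --type=json \
  -p='[{"op":"remove","path":"/spec/storage/swift/container"}]'
```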

This is from a different deployment, but I followed the same procedure.

CR configs.imageregistry.operator.openshift.io cluster only updates its status if we delete the Swift container from its spec (https://github.com/openshift/cluster-image-registry-operator/blob/d96a9e639acc07079a7eec73188ba07bfb3c6c8a/pkg/storage/swift/swift.go#L386):
~~~
(overcloud) [stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-09T15:13:47Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 7
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:managementState: {}
          f:storage:
            f:swift:
              f:authURL: {}
        f:status:
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-11T10:21:16Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-12T16:39:31Z"
    name: cluster
    resourceVersion: "1394069"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: da79fc1d-4d23-43df-aabb-105053de7842
  spec:
    httpSecret: a424c36f8edc9573c90d36cc4f87555f4aa018662c1ffaea6cc67a7d8b8cf6872c34fd863cc7c65d50660e6fc18d303a15dc6e5a295ef2fe319339f978f5987a
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3   <---------------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx  <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
  status:
    conditions:
    - lastTransitionTime: "2020-12-12T16:39:31Z"
      message: User supplied container already exists
      reason: Container exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-11T09:45:11Z"
      message: The registry is ready
      reason: Ready
      status: "True"
      type: Available
    - lastTransitionTime: "2020-12-12T15:13:34Z"
      message: The registry is ready
      reason: Ready
      status: "False"
      type: Progressing
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-09T15:20:12Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 5
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 7
    readyReplicas: 0
    storage:
      swift:
        authURL: http://172.16.0.119:5000//v3   <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx   <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
(overcloud) [stack@undercloud-0 ~]$ oc get pods -A | grep cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx
(overcloud) [stack@undercloud-0 ~]$ oc get pods -A | grep  cluster-xbk9m-image-registry
(overcloud) [stack@undercloud-0 ~]$ oc edit configs.imageregistry.operator.openshift.io  cluster
config.imageregistry.operator.openshift.io/cluster edited
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ 
(overcloud) [stack@undercloud-0 ~]$ oc get configs.imageregistry.operator.openshift.io -o yaml
apiVersion: v1
items:
- apiVersion: imageregistry.operator.openshift.io/v1
  kind: Config
  metadata:
    creationTimestamp: "2020-12-09T15:13:47Z"
    finalizers:
    - imageregistry.operator.openshift.io/finalizer
    generation: 9
    managedFields:
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:managementState: {}
          f:storage:
            f:swift:
              f:authURL: {}
      manager: oc
      operation: Update
      time: "2020-12-11T10:21:16Z"
    - apiVersion: imageregistry.operator.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"imageregistry.operator.openshift.io/finalizer": {}
        f:spec:
          .: {}
          f:logging: {}
          f:proxy: {}
          f:replicas: {}
          f:requests:
            .: {}
            f:read:
              .: {}
              f:maxWaitInQueue: {}
            f:write:
              .: {}
              f:maxWaitInQueue: {}
          f:rolloutStrategy: {}
          f:storage:
            .: {}
            f:swift: {}
        f:status:
          .: {}
          f:conditions: {}
          f:generations: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:storage:
            .: {}
            f:swift:
              .: {}
              f:authURL: {}
              f:authVersion: {}
              f:container: {}
              f:domain: {}
              f:regionName: {}
              f:tenant: {}
              f:tenantID: {}
          f:storageManaged: {}
      manager: cluster-image-registry-operator
      operation: Update
      time: "2020-12-12T16:41:23Z"
    name: cluster
    resourceVersion: "1394686"
    selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
    uid: da79fc1d-4d23-43df-aabb-105053de7842
  spec:
    httpSecret: a424c36f8edc9573c90d36cc4f87555f4aa018662c1ffaea6cc67a7d8b8cf6872c34fd863cc7c65d50660e6fc18d303a15dc6e5a295ef2fe319339f978f5987a
    logging: 2
    managementState: Managed
    proxy: {}
    replicas: 2
    requests:
      read:
        maxWaitInQueue: 0s
      write:
        maxWaitInQueue: 0s
    rolloutStrategy: RollingUpdate
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3                      <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep      <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
  status:
    conditions:
    - lastTransitionTime: "2020-12-12T16:41:20Z"
      reason: Swift container Exists
      status: "True"
      type: StorageExists
    - lastTransitionTime: "2020-12-11T09:45:11Z"
      message: The registry has minimum availability
      reason: MinimumAvailability
      status: "True"
      type: Available
    - lastTransitionTime: "2020-12-12T16:41:20Z"
      message: The deployment has not completed
      reason: DeploymentNotCompleted
      status: "True"
      type: Progressing
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Degraded
    - lastTransitionTime: "2020-12-09T15:13:50Z"
      status: "False"
      type: Removed
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: ImageRegistryCertificatesControllerDegraded
    - lastTransitionTime: "2020-12-09T15:13:51Z"
      reason: AsExpected
      status: "False"
      type: NodeCADaemonControllerDegraded
    - lastTransitionTime: "2020-12-09T15:20:12Z"
      reason: AsExpected
      status: "False"
      type: ImageConfigControllerDegraded
    generations:
    - group: apps
      hash: ""
      lastGeneration: 6
      name: image-registry
      namespace: openshift-image-registry
      resource: deployments
    observedGeneration: 9
    readyReplicas: 0
    storage:
      swift:
        authURL: https://172.16.0.119:13000/v3           <-----------------------------------
        authVersion: "3"
        container: cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep    <-----------------------------------
        domain: Default
        regionName: regionOne
        tenant: admin
        tenantID: a9e7109ca48440848c6bc8c951d41aa8
    storageManaged: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~

~~~
(overcloud) [stack@undercloud-0 ~]$ swift list
cluster-xbk9m-image-registry-rqtyhaacqynxjmhxhukhhvcbqetgtnqwx
cluster-xbk9m-image-registry-vwhxtdlywtlqovtudjlssjyqmrycgbqep
~~~

But what are the consequences of this?

Comment 2 Andreas Karis 2020-12-13 15:55:28 UTC
Background: This is a valid use case. A customer deployed an OpenStack cloud with HTTP endpoints, and OpenShift authenticates with those HTTP endpoints. The customer then wants to change the OSP endpoints to SSL/TLS, from http://<url>:5000 to https://<url>:13000 for Keystone. The cluster-image-registry-operator should correctly react to this and update the CRD's status.

At the moment, it seems that we only update the status when we create the storage container:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L438

Or when we remove storage:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L503

I am not sure whether this is the relevant code, but when an update is detected, we do not update status.storage.swift:
https://github.com/openshift/cluster-image-registry-operator/blob/302d1347ea154c634683bddda7cd57366100ed80/pkg/storage/swift/swift.go#L361
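For illustration, the kind of status-to-spec sync that would address this could look like the following sketch (hypothetical simplified types, not the operator's actual structs):

```go
package main

import "fmt"

// SwiftConfig is a simplified stand-in for the Swift storage settings in the
// imageregistry.operator.openshift.io/v1 Config type (hypothetical).
type SwiftConfig struct {
	AuthURL     string
	AuthVersion string
	Container   string
}

// Config mimics the relevant spec/status shape of the cluster Config CR.
type Config struct {
	Spec   struct{ Swift SwiftConfig }
	Status struct{ Swift SwiftConfig }
}

// syncSwiftStatus copies the applied Swift settings from spec into status,
// so status always reflects the currently applied configuration, even when
// no container is created or removed.
func syncSwiftStatus(cr *Config) {
	cr.Status.Swift = cr.Spec.Swift
}

func main() {
	cr := &Config{}
	cr.Spec.Swift = SwiftConfig{AuthURL: "https://172.16.0.119:13000/v3", AuthVersion: "3"}
	cr.Status.Swift = SwiftConfig{AuthURL: "http://172.16.0.119:5000/v3", AuthVersion: "3"}
	syncSwiftStatus(cr)
	fmt.Println(cr.Status.Swift.AuthURL) // → https://172.16.0.119:13000/v3
}
```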

- Andreas

Comment 6 Wenjing Zheng 2021-01-20 09:08:05 UTC
Verified on 4.7.0-0.nightly-2021-01-19-095812:
Valid changes to spec.storage.swift are reflected in status.storage.swift; invalid changes are not reflected.

Comment 9 errata-xmlrpc 2021-02-24 15:43:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633
