Bug 1691649 - Warning like unable to sync appears in registry pod log after changes to registry config
Keywords:
Status: CLOSED DUPLICATE of bug 1816656
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Ricardo Maraschini
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-22 07:34 UTC by Wenjing Zheng
Modified: 2024-03-25 15:15 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-08 08:57:35 UTC
Target Upstream Version:
Embargoed:



Description Wenjing Zheng 2019-03-22 07:34:16 UTC
Description of problem:
The warning below always appears after making changes to the registry config, even though the change takes effect:
E0322 07:20:05.530647       1 controller.go:222] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-21-164511

How reproducible:
always

Steps to Reproduce:
1. Make changes to the registry config (for example, with the commands shown below)
2. Check the registry operator log
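
A minimal way to exercise this, assuming the default operator namespace and deployment names (openshift-image-registry / cluster-image-registry-operator); verify them in your cluster:

$ oc edit configs.imageregistry.operator.openshift.io cluster      # make any spec change, e.g. adjust replicas
$ oc logs -n openshift-image-registry deployment/cluster-image-registry-operator | grep "unable to sync"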

Actual results:
The warning shown in the description always appears in the registry operator log.

Expected results:
There should be no such warning.

Additional info:

Comment 1 Oleg Bulatov 2019-05-02 12:18:51 UTC
This error appears because the operator gets its changes from watchers, and it gets its own changes with a delay. We can just suppress logging of this error.
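
For context, this is the standard Kubernetes optimistic-concurrency conflict: every write bumps metadata.resourceVersion, and an update sent with a stale resourceVersion is rejected with exactly the "the object has been modified" message above. A quick way to watch the version move (illustrative only, not the operator's actual code path):

$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.metadata.resourceVersion}{"\n"}'
$ oc edit configs.imageregistry.operator.openshift.io cluster      # make any spec change
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.metadata.resourceVersion}{"\n"}'

If the operator's cached copy still carries the old resourceVersion when it writes, the API server returns this conflict and the controller requeues, which is what the log line records.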

Comment 3 Anand Paladugu 2020-05-05 14:36:01 UTC
Hi Oleg

Another customer is seeing this issue after updating the config.  The case is 02619097.

The change was made to fix a typo in their Swift storage configuration, and the change seems to have taken effect despite the error.

Is the error harmless?  Would upgrading the cluster to the next version possibly make the error go away?

Also, I noticed that the target release for this BZ is now pushed to 4.4.0.  Is this going to be fixed in 4.4.0?

Thanks

Anand

Comment 4 Oleg Bulatov 2020-05-05 15:53:10 UTC
Yes, the error is harmless (unless you see it constantly repeated every few minutes).

The operator should observe the latest version of the cluster object on the next iteration and continue functioning in a normal state.

Comment 5 Anand Paladugu 2020-05-05 17:29:33 UTC
Thanks, Oleg. The customer is seeing it at random intervals. In one instance, we saw it happen after 2 seconds, and in another after a few minutes.

What is "next iteration" ?  the next time operator syncs the config?

Comment 6 Oleg Bulatov 2020-05-05 18:16:15 UTC
Yes, and the operator syncs the config every ~10 minutes, or whenever one of its dependent resources changes.

Comment 7 Anand Paladugu 2020-05-05 19:24:35 UTC
Thanks, Oleg. The customer made the config change 2 days ago and just confirmed that they are still seeing the error message at very short intervals, every few seconds. Is there any way to work around or mitigate the issue?

Comment 8 Oleg Bulatov 2020-05-05 20:17:08 UTC
We are not aware of such problems. Please collect the OCP version, the operator logs, and the output of `oc get config.imageregistry/cluster -o yaml`.
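
For completeness, the requested data could be gathered roughly like this (the operator namespace and deployment names are the usual defaults and are an assumption here):

$ oc get clusterversion
$ oc logs -n openshift-image-registry deployment/cluster-image-registry-operator > registry-operator.log
$ oc get config.imageregistry/cluster -o yaml > imageregistry-config.yaml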

Comment 9 Anand Paladugu 2020-05-06 12:57:37 UTC
Cluster version is 4.3.0

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.3.0     True        False         74d     Cluster version is 4.3.0

Logs and config are attached to the case 02619097.

Comment 14 Ricardo Maraschini 2020-05-12 09:14:44 UTC
I am closing this as DUPLICATE so we can have a reference to the original BZ where this got fixed. The problem reported in the DUPLICATE BZ is not the same, but the refactor that solved it (https://github.com/openshift/cluster-image-registry-operator/pull/504) solved this one as well.

*** This bug has been marked as a duplicate of bug 1816656 ***

