Bug 1691649
Summary: | Warning like "unable to sync" appears in registry pod log after changes to registry config | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Wenjing Zheng <wzheng>
Component: | Image Registry | Assignee: | Ricardo Maraschini <rmarasch>
Status: | CLOSED DUPLICATE | QA Contact: | Wenjing Zheng <wzheng>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 4.3.0 | CC: | adam.kaplan, aos-bugs, apaladug, jcrumple, openshift-bugs-escalate
Target Milestone: | --- | |
Target Release: | 4.5.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-05-08 08:57:35 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Wenjing Zheng
2019-03-22 07:34:16 UTC
This error appears because the operator gets its changes from watchers, and it sees its own changes with a delay. We can just suppress logging of this error.

Hi Oleg, another customer is seeing this issue after updating the config. The case is 02619097. The change was made to fix a typo in their Swift storage configuration, and it appears to have taken effect despite the error. Is the error harmless? Would upgrading the cluster to the next version possibly make the error go away? Also, I noticed that the target release for this BZ has been pushed to 4.4.0. Is this going to be fixed in 4.4.0? Thanks, Anand

Yes, the error is harmless (unless you see it constantly repeated every few minutes). The operator should observe the latest version of the cluster object on the next iteration and continue functioning normally.

Thanks, Oleg. The customer is seeing it at random intervals: in one instance it happened after 2 seconds, in another after a few minutes. What is the "next iteration"? The next time the operator syncs the config?

Yes, and the operator syncs the config every ~10 minutes, or whenever one of its dependent resources changes.

Thanks, Oleg. The customer made the config change 2 days ago and just confirmed that they are still seeing the error message at very short intervals, every few seconds. Is there any workaround to mitigate the issue?

We are not aware of such problems. Please collect the OCP version, the operator logs, and the output of `oc get config.imageregistry/cluster -o yaml`.

Cluster version is 4.3.0:

```
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.3.0     True        False         74d
```

Logs and config are attached to the case 02619097.

I am closing this as DUPLICATE so we can have a reference to the original BZ where this got fixed.
The problem reported on the DUPLICATE BZ is not the same, but the refactor that solves it (https://github.com/openshift/cluster-image-registry-operator/pull/504) solved this one as well.

*** This bug has been marked as a duplicate of bug 1816656 ***