+++ This bug was initially created as a clone of Bug #1886900 +++

Since the fix for bug 1873900 landed via [1], we've had lots of log noise like:

  I1009 15:40:43.074056 1 sync_worker.go:702] Manifest: {"apiVersion...

This line is descended from logging we grew way back in [2], but we only recently started triggering it, with the bump to v5 in [1]. We should drop the line: it's noisy spew in the log files, and the manifest content is already available via:

  $ oc adm release extract --to=manifests $PULLSPEC

[1]: https://github.com/openshift/cluster-version-operator/pull/448
[2]: https://github.com/openshift/cluster-version-operator/pull/14
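For context, the offending call logs every manifest unconditionally on each sync pass. The fix proposed here is to drop the line outright, but another common option would be gating the dump behind a verbosity level. Below is a minimal, self-contained sketch of that guard pattern using only the Go standard library — the `V`/`Verbose` names mirror klog's API shape, and the payload, level thresholds, and message text are illustrative, not the CVO's actual code:

```go
package main

import "fmt"

// verbosity stands in for the process's -v flag; manifest spew
// should only appear when the operator runs at a high level.
var verbosity = 2

// Verbose mimics a klog-style V(level) guard: logging through it
// is a no-op unless the requested level is at or below the
// configured verbosity.
type Verbose bool

func V(level int) Verbose { return Verbose(level <= verbosity) }

func (v Verbose) Infof(format string, args ...interface{}) {
	if v {
		fmt.Printf(format+"\n", args...)
	}
}

func main() {
	// Illustrative stand-in for a rendered release manifest.
	manifest := `{"apiVersion":"apps/v1","kind":"Deployment"}`

	V(2).Infof("Running sync for deployment %q", "openshift-config-operator")
	V(4).Infof("Manifest: %s", manifest) // suppressed at the default verbosity
}
```

At verbosity 2 only the first line prints; the per-manifest dump would require opting in with a higher `-v`, which is why gating (rather than unconditional logging) keeps default logs quiet.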
Trevor tells me this is a regression in 4.6, and that it yields a nearly 8x increase in log rate, so moving the target back to 4.6.0.
Reproduced on 4.6.0-rc.1:

# ./oc logs cluster-version-operator-79d45f4784-wsvlm > rc1-cvo.log
# grep "sync_worker.go:702" rc1-cvo.log | wc -l
3189
# grep "sync_worker.go:702" rc1-cvo.log | tail -n2
I1010 06:53:18.130079 1 sync_worker.go:702] Manifest: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"exclude.release.openshift.io/internal-openshift-hosted":"true","include.release.openshift.io/self-managed-high-availability":"true"},"labels":{"app":"openshift-config-operator"},"name":"openshift-config-operator","namespace":"openshift-config-operator"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"openshift-config-operator"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"openshift-config-operator"},"name":"openshift-config-operator"},"spec":{"containers":[{"command":["cluster-config-operator","operator"],"env":[{"name":"IMAGE","value":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25197b2709c0691c28424c9b07e505a71d13bf481e18bc42636cc84ee8fef033"},{"name":"OPERATOR_IMAGE_VERSION","value":"4.6.0-rc.1"},{"name":"OPERAND_IMAGE_VERSION","value":"4.6.0-rc.1"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25197b2709c0691c28424c9b07e505a71d13bf481e18bc42636cc84ee8fef033","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":3,"periodSeconds":3},"name":"openshift-config-operator","ports":[{"containerPort":8443,"name":"metrics","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":3,"periodSeconds":3},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"nodeSelector":{"node-role.kubernetes.io/master":""},"serviceAccountName":"openshift-config-operator","tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":120},{"effect":"NoExecute","key":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":120}],"volumes":[{"name":"serving-cert","secret":{"optional":true,"secretName":"config-operator-serving-cert"}}]}}}}
I1010 06:53:18.278386 1 sync_worker.go:702] Manifest: {"apiVersion":"config.openshift.io/v1","kind":"ClusterOperator","metadata":{"annotations":{"exclude.release.openshift.io/internal-openshift-hosted":"true","include.release.openshift.io/self-managed-high-availability":"true"},"name":"config-operator"},"spec":{},"status":{"versions":[{"name":"operator","version":"4.6.0-rc.1"}]}}
Verified on 4.6.0-0.nightly-2020-10-10-041109:

# ./oc logs cluster-version-operator-6bc6c897cb-w8xkf > nightly-cvo.log
# grep "sync_worker.go:702" nightly-cvo.log | wc -l
0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196