Description of problem:

The openshift-service-cert-signer-operator pod logs have repeated ERROR messages with a bad insert. It looks like it is supposed to be a key/value pair, but shows up as:

ERROR: logging before flag.Parse: E0108 15:32:54.425433       1 controller.go:118] {🐼 🐼} failed with: Operation cannot be fulfilled on servicecertsigneroperatorconfigs.servicecertsigner.config.openshift.io "instance": the object has been modified; please apply your changes to the latest version and try again

The panda face emoji is U+1F43C. Not sure if the ERROR message itself is an issue. The cluster is functional.

Version-Release number of selected component (if applicable):

# oc version
oc v4.0.0-0.123.0
kubernetes v1.11.0+4d56dbaf21

How reproducible:
Always

Steps to Reproduce:
1. oc logs -f <openshift-service-cert-signer-operator pod>
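Additional info:

For reference, the "Operation cannot be fulfilled ... the object has been modified" text is the standard Kubernetes 409 Conflict error returned when an update is sent with a stale resourceVersion, and the "ERROR: logging before flag.Parse" prefix is what glog prints when something logs before flags have been parsed. Below is a minimal sketch of how such conflicts are usually retried with client-go; the updateOperatorConfig helper is hypothetical, standing in for whatever update the operator actually performs, not code taken from this component.

package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

// updateOperatorConfig stands in for whatever call performs the update the
// operator log shows failing; each attempt must re-read the object and
// reapply the change so the request carries the latest resourceVersion.
func updateOperatorConfig() error {
	// ... GET the latest servicecertsigneroperatorconfig "instance",
	// mutate the fresh copy, then UPDATE it ...
	return nil
}

func main() {
	// RetryOnConflict re-runs the closure only when the API server answers
	// 409 Conflict, i.e. the "the object has been modified; please apply
	// your changes to the latest version and try again" error above.
	if err := retry.RetryOnConflict(retry.DefaultRetry, updateOperatorConfig); err != nil {
		fmt.Println("update failed:", err)
	}
}

retry.DefaultRetry only retries a handful of times with a short backoff, which is normally enough for the occasional conflict this log shows.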
@enj Not sure I have the right bugzilla component for this. Please change if needed.
I believe this is the right component. The panda gave it away for me.
Not sure if this is the same issue as this bug, but there are still many errors reported:

$ oc logs -f openshift-service-ca-operator-5877867cb6-qk6tn -n openshift-service-ca-operator

...
E0314 04:02:39.240741       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:39.278722       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:39.279122       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:40.228103       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:41.224105       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:41.259329       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:41.290063       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:41.290805       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:42.248086       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:43.237027       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:43.269083       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:43.301187       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:43.301911       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
...
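For what it's worth, "the server could not find the requested resource (put servicecas.operator.openshift.io cluster)" is the API server saying it does not (yet) serve that resource. Here is a rough discovery-client sketch for checking whether the servicecas API is registered; the kubeconfig handling and the operator.openshift.io/v1 group/version string are assumptions on my part, not taken from the operator.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The PUT in the log fails if this group/version is not served at all,
	// or if it is served but does not list the servicecas resource.
	resources, err := dc.ServerResourcesForGroupVersion("operator.openshift.io/v1")
	if err != nil {
		fmt.Println("group/version not served:", err)
		return
	}
	for _, r := range resources.APIResources {
		if r.Name == "servicecas" {
			fmt.Println("servicecas is served, kind:", r.Kind)
			return
		}
	}
	fmt.Println("operator.openshift.io/v1 is served, but servicecas is not listed")
}

Run against the affected cluster, it prints whether the group/version and the servicecas resource are actually being served at the time the operator logs those errors.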
(In reply to Chuan Yu from comment #6)
> Not sure if this is the same issue as this bug, but there are still many errors reported:
>
> $ oc logs -f openshift-service-ca-operator-5877867cb6-qk6tn -n openshift-service-ca-operator
>
> ...
> E0314 04:02:39.240741       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
> ...

That is unrelated, but feel free to confirm that both that error and the glog noise are gone.
Both of those errors are gone.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-04-05-165550   True        False         32m     Cluster version is 4.0.0-0.nightly-2019-04-05-165550
Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758