Bug 1664421 - openshift-service-cert-signer-operator logs have ERROR messages with a bad insert
Summary: openshift-service-cert-signer-operator logs have ERROR messages with a bad insert
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: apiserver-auth
Version: 4.1.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Matt Rogers
QA Contact: Chuan Yu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-08 17:35 UTC by Mike Fiedler
Modified: 2019-06-04 10:41 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:43 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:41:49 UTC

Description Mike Fiedler 2019-01-08 17:35:56 UTC
Description of problem:

The openshift-service-cert-signer-operator pod logs contain repeated ERROR messages with a bad insert. The insert looks like it is supposed to be a key/value pair, but it shows up as:

ERROR: logging before flag.Parse: E0108 15:32:54.425433       1 controller.go:118] {🐼 🐼} failed with: Operation cannot be fulfilled on servicecertsigneroperatorconfigs.servicecertsigner.config.openshift.io "instance": the object has been modified; please apply your changes to the latest version and try again

The panda face emoji is U+1F43C.
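
For context, that insert is what a Go struct printed through a %v format verb looks like, so a placeholder key presumably never got replaced with real values. A minimal sketch of how a line with that shape can be produced; the queueKey type, its field names, and the format string are hypothetical, not taken from the operator's source:

package main

import "fmt"

// queueKey is a hypothetical work-queue key type; the operator is assumed
// to use a struct like this rather than a plain string key.
type queueKey struct {
	namespace string
	name      string
}

func main() {
	// If both fields hold the same placeholder rune (here "🐼"), formatting
	// the struct with %v prints "{🐼 🐼}", matching the bad insert in the log.
	key := queueKey{namespace: "🐼", name: "🐼"}
	err := fmt.Errorf("Operation cannot be fulfilled on %q: the object has been modified", "instance")
	fmt.Printf("%v failed with: %v\n", key, err)
}

Running this prints "{🐼 🐼} failed with: Operation cannot be fulfilled on \"instance\": the object has been modified", the same shape as the log line above.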

Not sure if the ERROR message itself is an issue.  The cluster is functional.
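
The underlying error text ("the object has been modified; please apply your changes to the latest version and try again") is the standard Kubernetes optimistic-concurrency conflict, which controllers normally absorb by re-reading the object and retrying the update. A minimal sketch of that pattern using client-go's retry helper; updateOperatorConfig is a hypothetical stand-in, not the operator's actual code:

package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

// updateOperatorConfig is a hypothetical stand-in for the real
// get-modify-update call against the operator config resource.
func updateOperatorConfig() error {
	return nil
}

func main() {
	// RetryOnConflict re-runs the update function whenever the API server
	// returns a conflict ("the object has been modified"), which is the
	// conventional way controllers handle this error.
	err := retry.RetryOnConflict(retry.DefaultRetry, updateOperatorConfig)
	if err != nil {
		fmt.Println("update failed after retries:", err)
	}
}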


Version-Release number of selected component (if applicable):
# oc version
oc v4.0.0-0.123.0
kubernetes v1.11.0+4d56dbaf21



How reproducible: Always


Steps to Reproduce:
1. oc logs -f <openshift-service-cert-signer-operator pod>

Comment 1 Mike Fiedler 2019-01-08 17:45:22 UTC
@enj Not sure I have the right bugzilla component for this.  Please change if needed.

Comment 2 Neelesh Agrawal 2019-01-09 14:18:53 UTC
I believe this is the right component. The panda gave it away for me.

Comment 6 Chuan Yu 2019-03-14 04:05:21 UTC
Not sure if this is the same issue as this bug, but there are still many error reports:

$ oc logs -f openshift-service-ca-operator-5877867cb6-qk6tn -n openshift-service-ca-operator

...
E0314 04:02:39.240741       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:39.278722       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:39.279122       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:40.228103       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:41.224105       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:41.259329       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:41.290063       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:41.290805       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:42.248086       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:43.237027       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
E0314 04:02:43.269083       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
I0314 04:02:43.301187       1 leaderelection.go:245] successfully renewed lease openshift-service-ca-operator/openshift-service-ca-operator-lock
E0314 04:02:43.301911       1 resourcesync_controller.go:237] key failed with : the server could not find the requested resource (put servicecas.operator.openshift.io cluster)
...

Comment 7 Mo 2019-04-05 13:50:00 UTC
(In reply to Chuan Yu from comment #6)
> Not sure if this is the same issue as this bug, but there are still many
> error reports:
>
> [resourcesync_controller.go "key failed with" log excerpt trimmed; see comment #6]

That is unrelated, but feel free to confirm that both that error and the glog noise are gone.
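
For reference, the "ERROR: logging before flag.Parse" prefix in the original report is the message glog/klog emits when a log call happens before the flag set has been parsed. A minimal sketch of the usual fix, assuming the operator uses klog (the glog fork used by Kubernetes components); the main() wiring here is illustrative only, not the operator's actual entry point:

package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Registering klog's flags and parsing them before the first log call
	// avoids the "ERROR: logging before flag.Parse" prefix seen in the
	// original report.
	klog.InitFlags(nil)
	flag.Parse()

	klog.Info("service CA operator starting")
}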

Comment 8 Chuan Yu 2019-04-10 07:24:29 UTC
Both of those errors are gone.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-04-05-165550   True        False         32m     Cluster version is 4.0.0-0.nightly-2019-04-05-165550

Comment 9 Chuan Yu 2019-04-10 07:25:07 UTC
Verified.

Comment 11 errata-xmlrpc 2019-06-04 10:41:43 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

