Description of problem:

Starting around Dec. 21st, and in 4.10 only (introduced by PR https://github.com/openshift/machine-config-operator/pull/2833), the MCO operator log spams a warning message, e.g.:

W0124 22:13:09.332948 1 recorder.go:205] Error creating event &Event{ObjectMeta:{.16cd5489abe402fe 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:,Namespace:,Name:,UID:,APIVersion:,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingUpdated,Message:Updated ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server -n openshift-machine-config-operator because it changed,Source:EventSource{Component:machine-config-operator,Host:,},FirstTimestamp:2022-01-24 22:13:09.331198718 +0000 UTC m=+890.272064564,LastTimestamp:2022-01-24 22:13:09.331198718 +0000 UTC m=+890.272064564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Event ".16cd5489abe402fe" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace

This is not a blocking issue, but it is worth fixing cosmetically, and also to avoid filling up log storage.
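The warning comes from the apiserver rejecting events whose involvedObject has an empty namespace while the event itself is created in the operator's namespace. The following is a minimal, self-contained Go sketch of that validation rule; the types and the function name are illustrative simplifications, not the real Kubernetes validation code:

```go
package main

import "fmt"

// objectRef is a simplified stand-in for corev1.ObjectReference.
type objectRef struct {
	Kind      string
	Namespace string
	Name      string
}

// event is a simplified stand-in for corev1.Event.
type event struct {
	Namespace      string
	InvolvedObject objectRef
}

// validateEventNamespace mimics the apiserver check behind the warning in
// this report: a namespaced event's namespace must match
// involvedObject.Namespace. (Illustrative only.)
func validateEventNamespace(e event) error {
	if e.InvolvedObject.Namespace != e.Namespace {
		return fmt.Errorf("involvedObject.namespace: Invalid value: %q: does not match event.namespace", e.InvolvedObject.Namespace)
	}
	return nil
}

func main() {
	// The spammed events carry a completely empty involvedObject, so the
	// namespace comparison fails and the apiserver rejects the event.
	bad := event{Namespace: "openshift-machine-config-operator", InvolvedObject: objectRef{}}
	fmt.Println(validateEventNamespace(bad)) // rejected with a validation error

	// Populating involvedObject with a matching namespace passes the check.
	good := event{
		Namespace: "openshift-machine-config-operator",
		InvolvedObject: objectRef{
			Kind:      "Deployment",
			Namespace: "openshift-machine-config-operator",
			Name:      "machine-config-operator",
		},
	}
	fmt.Println(validateEventNamespace(good)) // accepted
}
```

The fix on the recorder side is to attach a properly populated object reference (one whose namespace matches the event's namespace) when emitting events, rather than an empty ObjectReference.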
See example log:
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/2927/pull-ci-openshift-machine-config-operator-master-e2e-aws/1485726952353435648/artifacts/e2e-aws/gather-extra/artifacts/pods/openshift-machine-config-operator_machine-config-operator-8454b9478b-b6qk7_machine-config-operator.log

vs. a 4.9 log:
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/2928/pull-ci-openshift-machine-config-operator-release-4.9-e2e-aws/1485671156978552832/artifacts/e2e-aws/gather-extra/artifacts/pods/openshift-machine-config-operator_machine-config-operator-7986544b5-5fg96_machine-config-operator.log

Version-Release number of MCO (Machine Config Operator) (if applicable): 4.10

Platform (AWS, VSphere, Metal, etc.): all

Are you certain that the root cause of the issue being reported is the MCO (Machine Config Operator)? (Y/N/Not sure): Y

How reproducible: 100%

Steps to Reproduce:
1. Any CI job
2.
3.

Actual results:

Expected results:

Additional info:
1. Please consider attaching a must-gather archive (via oc adm must-gather). Please review must-gather contents for sensitive information before attaching any must-gathers to a Bugzilla report. You may also mark the bug private if you wish.
2. If a must-gather is unavailable, please provide the output of:
$ oc get co machine-config -o yaml
$ oc get mcp (and oc describe mcp/${degraded_pool} if pools are degraded)
$ oc get mc
$ oc get pod -n openshift-machine-config-operator
$ oc get node -o wide
3. If a node is not accessible via the API, please provide console/journal/kubelet logs of the problematic node.
4. Are there RHEL nodes on the cluster? If yes, please upload the whole Ansible logs or Jenkins job.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056