Bug 1816178 - unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Summary: unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Antonio Murdaca
QA Contact: Michael Nguyen
URL:
Whiteboard:
Duplicates: 1821912 1828965 1848910
Depends On:
Blocks: 1771572
 
Reported: 2020-03-23 13:54 UTC by Alexander Chuzhoy
Modified: 2023-10-06 19:28 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-04 18:06:21 UTC
Target Upstream Version:
Embargoed:


Links
- GitHub openshift/machine-config-operator pull 1693 (closed): Bug 1816178: MCO: have the CVO create CRDs (last updated 2021-01-13 08:24:44 UTC)
- Red Hat Knowledge Base (Solution) 5207731 (last updated 2020-07-07 00:58:14 UTC)
- Red Hat Product Errata RHBA-2020:2409 (last updated 2020-08-04 18:06:26 UTC)

Description Alexander Chuzhoy 2020-03-23 13:54:11 UTC
Version: 4.4.0-0.nightly-2020-03-20-094134


During deployment I see the following message repeating continuously
on the bootstrap VM when I run 'journalctl -b -f -u bootkube.service':

Mar 22 01:26:55 localhost bootkube.sh[17541]:
"99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST
mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": no
matches for kind "MachineConfig" in version
"machineconfiguration.openshift.io/v1"


The deployment completes successfully despite these messages.
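
A quick way to check whether the error is just the CRD not having been registered yet is to query for the CRD directly. This is a minimal sketch; the kubeconfig path on the bootstrap host is an assumption and may differ in your environment.

# Succeeds once the machine-config-operator has registered its CRD;
# until then, cluster-bootstrap cannot map kind "MachineConfig" and
# logs the REST mapping error above.
oc --kubeconfig=/etc/kubernetes/kubeconfig get crd \
    machineconfigs.machineconfiguration.openshift.io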

Comment 1 Stephen Benjamin 2020-03-26 17:44:30 UTC
These errors are normal; the bootstrap node attempts to apply manifests continually until they succeed. This particular error appears because the machine-config-operator hasn't created its CRDs yet, and it goes away once the operator comes up.
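
The retry behavior looks roughly like the loop below. This is a sketch of the pattern, not the actual cluster-bootstrap implementation; the manifest path, kubeconfig path, and sleep interval are all assumptions.

# Keep applying the manifest until the API server can map the
# MachineConfig kind, i.e. until the MCO has created its CRD.
until oc --kubeconfig=/etc/kubernetes/kubeconfig apply -f \
    /opt/openshift/manifests/99_openshift-machineconfig_99-worker-ssh.yaml; do
  echo "MachineConfig CRD not registered yet; retrying..." >&2
  sleep 5
done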

Comment 2 Stephen Benjamin 2020-03-26 17:46:49 UTC
Actually, after discussing this, we think it might be helpful if the bootkube logs either suppressed these messages or provided an indication to the user that they are expected until the operator is available. We constantly get reports about these messages from end users of OCP.

Comment 3 Abhinav Dahiya 2020-03-26 18:17:17 UTC
> Actually, after discussing this, we think it might be helpful if the bootkube logs either suppressed these messages

The bootkube service is not going to do any such suppressing; the MCO should render the CRD on the bootstrap host like other operators do if we want to remove these messages.

> or provided an indication to the user that they are expected until the operator is available. We constantly get reports about these messages from end users of OCP.

That information is not available to the bootkube (cluster-bootstrap) script, and I don't think it should be. Its job is to push manifests to the cluster, and that's it; it doesn't need to know which operator provides these resources.

Moving to the MCO team to decide whether they want to add the CRD at an early stage; if not, this bug should be closed as WONTFIX.
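
To make "render the CRD on the bootstrap host" concrete, here is a sketch of the bootstrap-manifest approach being discussed. The file path is an assumption, the schema is deliberately trimmed rather than being the shipped CRD, and note that the fix eventually merged in PR 1693 had the CVO create the CRDs instead.

# Drop a (trimmed) MachineConfig CRD into the bootstrap manifests
# directory so cluster-bootstrap can register the kind before the
# MCO pods come up.
cat > /opt/openshift/manifests/00_machineconfig_crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: machineconfigs.machineconfiguration.openshift.io
spec:
  group: machineconfiguration.openshift.io
  names:
    kind: MachineConfig
    listKind: MachineConfigList
    plural: machineconfigs
    singular: machineconfig
  scope: Cluster
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF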

Comment 4 Antonio Murdaca 2020-04-08 09:05:39 UTC
(In reply to Abhinav Dahiya from comment #3)
> > Actually, after discussing this, we think it might be helpful if the bootkube logs either suppressed these messages
> 
> The bootkube service is not going to do any such suppressing; the MCO
> should render the CRD on the bootstrap host like other operators do if
> we want to remove these messages.
> 
> > or provided an indication to the user that they are expected until the operator is available. We constantly get reports about these messages from end users of OCP.
> 
> That information is not available to the bootkube (cluster-bootstrap)
> script, and I don't think it should be. Its job is to push manifests to
> the cluster, and that's it; it doesn't need to know which operator
> provides these resources.
> 
> Moving to the MCO team to decide whether they want to add the CRD at an
> early stage; if not, this bug should be closed as WONTFIX.

Adding the CRD at an early stage is something scheduled for either 4.5 or 4.6, so I'm lowering the priority accordingly and moving the target to 4.5.

Comment 5 Scott Dodson 2020-04-08 13:39:34 UTC
*** Bug 1821912 has been marked as a duplicate of this bug. ***

Comment 6 Scott Dodson 2020-04-08 14:13:26 UTC
This certainly seems to be happening quite frequently in 4.4, though potentially as a result of other unrelated problems? See the duped bug.

Comment 7 Antonio Murdaca 2020-04-28 21:36:26 UTC
*** Bug 1828965 has been marked as a duplicate of this bug. ***

Comment 8 rlopez 2020-04-29 01:14:04 UTC
Thanks, Antonio, for including me in the discussion. I'm with Stephen Benjamin on this one: if we're not willing to suppress the messages, then creating the CRD earlier so that it doesn't flood bootkube.service would be ideal. In essence, these messages make reading the output of 'journalctl -f -u bootkube.service' very difficult.
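
In the meantime, a workaround for reading the logs is to filter out this one message; a sketch, assuming this is the only noise you want hidden:

# Follow the bootkube logs with the known-transient REST mapping
# error filtered out.
journalctl -b -f -u bootkube.service \
  | grep -v 'no matches for kind "MachineConfig"'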

Comment 11 Michael Nguyen 2020-04-30 18:00:00 UTC
Verified on 4.5.0-0.nightly-2020-04-30-112808. Compared to before, there are now only a few of these lines early in the install, while the CRD is being created.
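
One way to quantify "only a few lines" during verification, as a sketch (run on the bootstrap host; the grep pattern is an assumption):

# Count how many times the REST mapping error was logged this boot.
journalctl -b -u bootkube.service \
  | grep -c 'no matches for kind "MachineConfig"'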

Comment 19 Beth White 2020-06-23 16:13:53 UTC
*** Bug 1848910 has been marked as a duplicate of this bug. ***

Comment 23 errata-xmlrpc 2020-08-04 18:06:21 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5 image release advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

Comment 24 Yu Qi Zhang 2020-09-11 21:04:13 UTC
*** Bug 1848910 has been marked as a duplicate of this bug. ***

