Bug 1714769 - Multiple type: Failure conditions on KubeletConfig CR status when no MCP selector is provided
Summary: Multiple type: Failure conditions on KubeletConfig CR status when no MCP selector is provided
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Ryan Phillips
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-05-28 19:05 UTC by Seth Jennings
Modified: 2019-10-16 06:29 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:29:21 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:2922 (last updated 2019-10-16 06:29:35 UTC)

Description Seth Jennings 2019-05-28 19:05:11 UTC
While trying to see how a KubeletConfig behaves when no MCP selector is provided, I found that we add multiple `type: Failure` conditions to the CR status:

$ oc get kubeletconfig set-max-pods -oyaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  creationTimestamp: "2019-05-28T18:55:30Z"
  generation: 1
  name: set-max-pods
  resourceVersion: "56186"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/set-max-pods
  uid: 2a43598d-817a-11e9-bd4d-fa163ed67f55
spec:
  kubeletConfig:
    maxPods: 100
  machineConfigPoolSelector:
    matchLabels: {}
status:
  conditions:
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:30Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:31Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:31Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:32Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:33Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:35Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:41Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:55:51Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:56:11Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure
  - lastTransitionTime: "2019-05-28T18:56:52Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure

There should be only one condition of `type: Failure`, updated in place on subsequent failures.
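
For comparison, the expected shape of the status would be a single Failure condition that the controller rewrites on each reconcile instead of appending a new entry. A minimal sketch of what that would look like (timestamp illustrative, message copied from the output above):

status:
  conditions:
  - lastTransitionTime: "2019-05-28T18:56:52Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig set-max-pods'
    status: "False"
    type: Failure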

Comment 2 Ryan Phillips 2019-06-03 16:47:20 UTC
PR Merged

Comment 4 Sunil Choudhary 2019-06-25 10:22:55 UTC
Verified on 4.2.0-0.nightly-2019-06-25-003324

$ oc version --short
Client Version: v4.1.0-201905191700+7bd2e5b-dirty
Server Version: v1.14.0+952fea3

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-06-25-003324   True        False         4h49m   Cluster version is 4.2.0-0.nightly-2019-06-25-003324

$ oc get kubeletconfig custom-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  creationTimestamp: "2019-06-25T10:12:31Z"
  generation: 1
  name: custom-kubelet
  resourceVersion: "79401"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/custom-kubelet
  uid: be40c257-9731-11e9-a22f-06afe3c5b920
spec:
  kubeletConfig:
    imageGCHighThresholdPercent: 80
    imageGCLowThresholdPercent: 75
    imageMinimumGCAge: 5m
    maxPods: 240
    podsPerCore: 12
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: custom-kubelet
status:
  conditions:
  - lastTransitionTime: "2019-06-25T10:12:31Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig'
    status: "False"
    type: Failure
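
The remaining single Failure condition is presumably expected here, since no MachineConfigPool carries the custom-kubelet: custom-kubelet label that the selector asks for. To actually apply the config rather than exercise the failure path, a pool would need to be labeled to match the selector; a hypothetical example targeting the worker pool:

$ oc label machineconfigpool worker custom-kubelet=custom-kubelet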

Comment 6 errata-xmlrpc 2019-10-16 06:29:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

