Description of problem:
The ContainerRuntimeConfigController does not resync: if the ContainerRuntimeConfig (ctrcfg) is created first and the target MachineConfigPool is labeled afterwards, the controller never picks up the match.

➜  ~ oc get ContainerRuntimeConfig -o yaml
apiVersion: v1
items:
- apiVersion: machineconfiguration.openshift.io/v1
  kind: ContainerRuntimeConfig
  metadata:
    creationTimestamp: 2019-03-07T07:03:49Z
    generation: 1
    name: infraimaged-crio
    resourceVersion: "132151"
    selfLink: /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/infraimaged-crio
    uid: 2883dd7f-40a7-11e9-85b4-0273ccbbc370
  spec:
    containerRuntimeConfig:
      logLevel: debug
      pidsLimit: 2048
    machineConfigPoolSelector:
      matchLabels:
        custom-crio: ose-pod-worker
  status:
    conditions:
    - lastTransitionTime: 2019-03-07T07:03:49Z
      message: 'Error: could not find any MachineConfigPool set for ContainerRuntimeConfig infraimaged-crio'
      status: "False"
      type: Failure
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

➜  ~ oc get machineconfigpool --show-labels
NAME     CONFIG                                    UPDATED   UPDATING   DEGRADED   LABELS
master   master-2e3a3ab32af04c45c3fe4443b4405f41   True      False      False      operator.machineconfiguration.openshift.io/required-for-upgrade=
worker   worker-4807d53364016e4500379157044f1309   True      False      False      custom-crio=ose-pod-worker

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-04-234414

How reproducible:
Always

Steps to Reproduce:
1. Create a ContainerRuntimeConfig:

echo -n 'apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: infraimaged-crio
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: ose-pod-worker
  containerRuntimeConfig:
    pidsLimit: 2048
    logLevel: debug' | oc create -f -

2. Label the target MachineConfigPool:

oc label machineconfigpool worker custom-crio=ose-pod-worker

3. Check whether a new MachineConfig is generated, and check the ctrcfg status.

Actual results:
No MachineConfig is generated, and the ctrcfg stays in the error status even though the pool now carries the matching label (see the --show-labels output above).

Expected results:
The ctrcfg should be synced once the pool is labeled.

Additional info:
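For context on where the condition comes from: the sync loop resolves the config's machineConfigPoolSelector against the pools in the cluster and fails when nothing matches. The Go sketch below is a minimal, hypothetical reconstruction of that matching step (the function name and the mcfgv1 import path are assumptions, not the literal MCO source). Since nothing re-runs this step when a pool's labels change later, the config stays stuck with the Failure condition shown above.

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"

	// Import path assumed from the machine-config-operator repo layout.
	mcfgv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
)

// getPoolsForConfig returns every MachineConfigPool whose labels satisfy the
// config's machineConfigPoolSelector. In the repro above the worker pool is
// labeled only after the ctrcfg exists, so the first (and only) sync returns
// the error recorded in the status condition.
func getPoolsForConfig(cfg *mcfgv1.ContainerRuntimeConfig, pools []*mcfgv1.MachineConfigPool) ([]*mcfgv1.MachineConfigPool, error) {
	selector, err := metav1.LabelSelectorAsSelector(cfg.Spec.MachineConfigPoolSelector)
	if err != nil {
		return nil, fmt.Errorf("invalid machineConfigPoolSelector: %v", err)
	}
	var matched []*mcfgv1.MachineConfigPool
	for _, pool := range pools {
		if selector.Matches(labels.Set(pool.Labels)) {
			matched = append(matched, pool)
		}
	}
	if len(matched) == 0 {
		return nil, fmt.Errorf("could not find any MachineConfigPool set for ContainerRuntimeConfig %s", cfg.Name)
	}
	return matched, nil
}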
Urvashi, any update on this bug?
@dwalsh Yeah, I know the cause; working on a fix.
Fixed in https://github.com/openshift/machine-config-operator/pull/556
Fix was merged into machine-config-operator/master
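For anyone tracing the change: the shape of the fix is to watch MachineConfigPool events and re-enqueue any ContainerRuntimeConfig whose selector matches the added or updated pool, so a late `oc label machineconfigpool ...` triggers a resync. A minimal sketch of such a handler follows; this is an illustrative assumption about the approach, not the literal diff in the PR (the enqueueConfigsForPool name and its plumbing are invented for the example).

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/util/workqueue"

	// Import path assumed from the machine-config-operator repo layout.
	mcfgv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
)

// enqueueConfigsForPool would be wired up as the AddFunc/UpdateFunc of a
// MachineConfigPool informer. When labeling the worker pool fires an update
// event, every ContainerRuntimeConfig whose selector now matches the pool is
// put back on the work queue, giving the sync loop a second chance at the
// matching step that failed earlier.
func enqueueConfigsForPool(queue workqueue.RateLimitingInterface, pool *mcfgv1.MachineConfigPool, configs []*mcfgv1.ContainerRuntimeConfig) {
	for _, cfg := range configs {
		selector, err := metav1.LabelSelectorAsSelector(cfg.Spec.MachineConfigPoolSelector)
		if err != nil {
			continue // malformed selectors are reported by the sync loop itself
		}
		if selector.Matches(labels.Set(pool.Labels)) {
			// ContainerRuntimeConfig is cluster-scoped, so the name is the key.
			queue.Add(cfg.Name)
		}
	}
}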
Checked with 4.0.0-0.nightly-2019-03-22-002648 and this issue is fixed.

[root@preserved-bind-and-bastion ~]# echo 'apiVersion: machineconfiguration.openshift.io/v1
> kind: ContainerRuntimeConfig
> metadata:
>   name: pid2kb-crio
> spec:
>   machineConfigPoolSelector:
>     matchLabels:
>       custom-crio: pid2kb-worker
>   containerRuntimeConfig:
>     pidsLimit: 2048' | oc create -f -
containerruntimeconfig.machineconfiguration.openshift.io/pid2kb-crio created

[root@preserved-bind-and-bastion ~]# oc get ctrcfg -o yaml
apiVersion: v1
items:
- apiVersion: machineconfiguration.openshift.io/v1
  kind: ContainerRuntimeConfig
  metadata:
    creationTimestamp: 2019-03-22T08:07:42Z
    generation: 1
    name: pid2kb-crio
    resourceVersion: "17696"
    selfLink: /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/pid2kb-crio
    uid: 9169dc3b-4c79-11e9-8bec-064261b9ef4a
  spec:
    containerRuntimeConfig:
      pidsLimit: 2048
    machineConfigPoolSelector:
      matchLabels:
        custom-crio: pid2kb-worker
  status:
    conditions:
    - lastTransitionTime: 2019-03-22T08:07:42Z
      message: 'Error: could not find any MachineConfigPool set for ContainerRuntimeConfig pid2kb-crio'
      status: "False"
      type: Failure
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[root@preserved-bind-and-bastion ~]# oc label machineconfigpool worker custom-crio=pid2kb-worker
machineconfigpool.machineconfiguration.openshift.io/worker labeled

[root@preserved-bind-and-bastion ~]# oc get ctrcfg -o yaml
apiVersion: v1
items:
- apiVersion: machineconfiguration.openshift.io/v1
  kind: ContainerRuntimeConfig
  metadata:
    creationTimestamp: 2019-03-22T08:07:42Z
    finalizers:
    - 99-worker-cdc72481-4c77-11e9-9f14-0664e97786d0-containerruntime
    generation: 1
    name: pid2kb-crio
    resourceVersion: "18470"
    selfLink: /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/pid2kb-crio
    uid: 9169dc3b-4c79-11e9-8bec-064261b9ef4a
  spec:
    containerRuntimeConfig:
      pidsLimit: 2048
    machineConfigPoolSelector:
      matchLabels:
        custom-crio: pid2kb-worker
  status:
    conditions:
    - lastTransitionTime: 2019-03-22T08:09:04Z
      message: Success
      status: "True"
      type: Success
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[root@preserved-bind-and-bastion ~]# oc get machineconfig
NAME                                                              GENERATEDBYCONTROLLER       IGNITIONVERSION   CREATED
00-master                                                         4.0.22-201903211106-dirty   2.2.0             13m
00-master-ssh                                                     4.0.22-201903211106-dirty   2.2.0             13m
00-worker                                                         4.0.22-201903211106-dirty   2.2.0             13m
00-worker-ssh                                                     4.0.22-201903211106-dirty   2.2.0             13m
01-master-container-runtime                                       4.0.22-201903211106-dirty   2.2.0             13m
01-master-kubelet                                                 4.0.22-201903211106-dirty   2.2.0             13m
01-worker-container-runtime                                       4.0.22-201903211106-dirty   2.2.0             13m
01-worker-kubelet                                                 4.0.22-201903211106-dirty   2.2.0             13m
99-master-cdc56b09-4c77-11e9-9f14-0664e97786d0-registries         4.0.22-201903211106-dirty   2.2.0             8m13s
99-worker-cdc72481-4c77-11e9-9f14-0664e97786d0-containerruntime   4.0.22-201903211106-dirty   2.2.0             14s
99-worker-cdc72481-4c77-11e9-9f14-0664e97786d0-registries         4.0.22-201903211106-dirty   2.2.0             8m13s
master-d751b726eb85382206392f752d5f829d                           4.0.22-201903211106-dirty   2.2.0             13m
worker-14eb7110e02d8ee4be9b1a2f95594805                           4.0.22-201903211106-dirty   2.2.0             13m
worker-520da5a564c8741975a484dc74357847                           4.0.22-201903211106-dirty   2.2.0             11s

[root@preserved-bind-and-bastion ~]# oc get machineconfig
NAME                                                              GENERATEDBYCONTROLLER       IGNITIONVERSION   CREATED
00-master                                                         4.0.22-201903211106-dirty   2.2.0             24m
00-master-ssh                                                     4.0.22-201903211106-dirty   2.2.0             24m
00-worker                                                         4.0.22-201903211106-dirty   2.2.0             24m
00-worker-ssh                                                     4.0.22-201903211106-dirty   2.2.0             24m
01-master-container-runtime                                       4.0.22-201903211106-dirty   2.2.0             24m
01-master-kubelet                                                 4.0.22-201903211106-dirty   2.2.0             24m
01-worker-container-runtime                                       4.0.22-201903211106-dirty   2.2.0             24m
01-worker-kubelet                                                 4.0.22-201903211106-dirty   2.2.0             24m
99-master-cdc56b09-4c77-11e9-9f14-0664e97786d0-registries         4.0.22-201903211106-dirty   2.2.0             18m
99-worker-cdc72481-4c77-11e9-9f14-0664e97786d0-containerruntime   4.0.22-201903211106-dirty   2.2.0             10m
99-worker-cdc72481-4c77-11e9-9f14-0664e97786d0-registries         4.0.22-201903211106-dirty   2.2.0             18m
master-d751b726eb85382206392f752d5f829d                           4.0.22-201903211106-dirty   2.2.0             24m
worker-14eb7110e02d8ee4be9b1a2f95594805                           4.0.22-201903211106-dirty   2.2.0             24m
worker-520da5a564c8741975a484dc74357847                           4.0.22-201903211106-dirty   2.2.0             10m

[root@preserved-bind-and-bastion ~]# oc describe node -l node-role.kubernetes.io/worker= | grep -i machineconfig
machineconfiguration.openshift.io/currentConfig: worker-520da5a564c8741975a484dc74357847
machineconfiguration.openshift.io/desiredConfig: worker-520da5a564c8741975a484dc74357847
machineconfiguration.openshift.io/state: Done
machineconfiguration.openshift.io/currentConfig: worker-520da5a564c8741975a484dc74357847
machineconfiguration.openshift.io/desiredConfig: worker-520da5a564c8741975a484dc74357847
machineconfiguration.openshift.io/state: Done
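To script this verification instead of eyeballing `oc get ctrcfg -o yaml`, something like the client-go sketch below can poll the config for a Success condition. This is a hypothetical helper, not part of the verification above; it assumes a recent client-go (older releases' dynamic client takes no context argument) and reuses the pid2kb-crio name from the steps here.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// hasSuccessCondition walks status.conditions looking for type=Success, status=True.
func hasSuccessCondition(obj *unstructured.Unstructured) bool {
	conds, found, err := unstructured.NestedSlice(obj.Object, "status", "conditions")
	if err != nil || !found {
		return false
	}
	for _, c := range conds {
		cond, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		if cond["type"] == "Success" && cond["status"] == "True" {
			return true
		}
	}
	return false
}

func main() {
	restCfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(restCfg)
	if err != nil {
		panic(err)
	}
	// ContainerRuntimeConfig is cluster-scoped, so no namespace is needed.
	gvr := schema.GroupVersionResource{
		Group:    "machineconfiguration.openshift.io",
		Version:  "v1",
		Resource: "containerruntimeconfigs",
	}
	for i := 0; i < 30; i++ { // up to ~5 minutes at 10s intervals
		obj, err := dyn.Resource(gvr).Get(context.TODO(), "pid2kb-crio", metav1.GetOptions{})
		if err == nil && hasSuccessCondition(obj) {
			fmt.Println("ContainerRuntimeConfig synced")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for ContainerRuntimeConfig to sync")
}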
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758