Bug 1918440

Summary: Kernel arguments get reapplied even when no new kargs have been added in MachineConfig
Product: OpenShift Container Platform
Component: Machine Config Operator
Reporter: Sinny Kumari <skumari>
Assignee: Sinny Kumari <skumari>
QA Contact: Michael Nguyen <mnguyen>
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Version: 4.7
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-02-24 15:55:11 UTC
Bug Blocks: 1989403

Description Sinny Kumari 2021-01-20 17:49:58 UTC
Under certain conditions, kargs get reapplied even when there are no changes to the kargs.

Steps to Reproduce:
1. Apply a MachineConfig that adds a kernel argument (karg).

Example MachineConfig:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: worker-kargs
spec:
  config:
    ignition:
      version: 3.0.0
  kernelArguments:
    - "bar"
2. Once the MachineConfig is successfully applied, create a new MachineConfig that makes an unrelated change, such as adding an extension:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: master-extension1
spec:
  config:
    ignition:
      version: 3.0.0
  extensions:
    - usbguard

3. Check the machine-config-daemon log: it unnecessarily reapplies kargs even though none were added.
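The decision at issue is the diff the machine-config-daemon computes between the old and new rendered configs (visible later in the logs as `&{osUpdate:false kargs:true ...}`). A minimal sketch of the intended behavior, in Python for illustration (the MCO itself is written in Go, and the function name `kargs_changed` is hypothetical): reapply kernel arguments only when the `kernelArguments` lists themselves differ, not whenever any field of the MachineConfig changes.

```python
# Illustrative sketch, NOT the actual MCO code: decide whether kernel
# arguments need to be reapplied when moving between two rendered configs.

def kargs_changed(old_config: dict, new_config: dict) -> bool:
    """Return True only if the kernelArguments lists differ."""
    old_kargs = old_config.get("kernelArguments") or []
    new_kargs = new_config.get("kernelArguments") or []
    return old_kargs != new_kargs

# The reported bug is equivalent to flagging kargs as changed whenever
# anything in the MachineConfig changes; the fix compares only the kargs.
old = {"kernelArguments": ["bar"], "extensions": []}
new = {"kernelArguments": ["bar"], "extensions": ["usbguard"]}

if kargs_changed(old, new):
    print("would run: rpm-ostree kargs ...")
else:
    print("kargs unchanged; skip rpm-ostree kargs")
```

With this comparison, the extension-only MachineConfig in step 2 produces no karg change, so `rpm-ostree kargs` is not re-run.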

Comment 2 Michael Nguyen 2021-01-22 15:43:23 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-22-104107   True        False         81m     Cluster version is 4.7.0-0.nightly-2021-01-22-104107

$ cat << EOF > karg.yaml
> apiVersion: machineconfiguration.openshift.io/v1
> kind: MachineConfig
> metadata:
>   labels:
>     machineconfiguration.openshift.io/role: worker
>   name: worker-kargs
> spec:
>   config:
>     ignition:
>       version: 3.0.0
>   kernelArguments:
>     - "bar"
> EOF

$ oc create -f karg.yaml 
machineconfig.machineconfiguration.openshift.io/worker-kargs created

$ oc get mc
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
00-worker                                          4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
01-master-container-runtime                        4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
01-master-kubelet                                  4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
01-worker-container-runtime                        4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
01-worker-kubelet                                  4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
99-master-generated-registries                     4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
99-master-ssh                                                                                 3.1.0             54m
99-worker-generated-registries                     4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
99-worker-ssh                                                                                 3.1.0             54m
rendered-master-e2e1a48726d3c0d40b0fc4f232a0fb8c   4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
rendered-worker-fa380cb3de6dc7ad55ff61075439843f   4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             47m
worker-kargs                                                                                  3.0.0             2s

$ oc get mcp/worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-fa380cb3de6dc7ad55ff61075439843f   False     True       False      3              0                   0                     0                      48m

$ watch oc get mcp/worker

$ oc get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ci-ln-54y0bd2-f76d1-lffmr-master-0         Ready    master   69m   v1.20.0+d9c52cc
ci-ln-54y0bd2-f76d1-lffmr-master-1         Ready    master   69m   v1.20.0+d9c52cc
ci-ln-54y0bd2-f76d1-lffmr-master-2         Ready    master   69m   v1.20.0+d9c52cc
ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq   Ready    worker   59m   v1.20.0+d9c52cc
ci-ln-54y0bd2-f76d1-lffmr-worker-c-kft75   Ready    worker   59m   v1.20.0+d9c52cc
ci-ln-54y0bd2-f76d1-lffmr-worker-d-mscc4   Ready    worker   62m   v1.20.0+d9c52cc

$ oc debug node/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
Starting pod/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# cat /proc/cmdline 
BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-39f4649a787b9c2f81dae7a8b2ec312d5d47c9548f7cf26a77a6a5e1ab72fc3c/vmlinuz-4.18.0-240.10.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.1/rhcos/39f4649a787b9c2f81dae7a8b2ec312d5d47c9548f7cf26a77a6a5e1ab72fc3c/0 ignition.platform.id=gcp root=UUID=c919e011-8f73-44ad-ba0c-652fb2ded11a rw rootflags=prjquota bar
sh-4.4# exit
exit
sh-4.2# exit
exit

Removing debug pod ...

$ oc get pods -A --field-selector spec.nodeName=ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
NAMESPACE                                NAME                           READY   STATUS    RESTARTS   AGE
openshift-cluster-csi-drivers            gcp-pd-csi-driver-node-kck4h   3/3     Running   0          60m
openshift-cluster-node-tuning-operator   tuned-ld95z                    1/1     Running   0          60m
openshift-dns                            dns-default-brh65              3/3     Running   0          60m
openshift-image-registry                 node-ca-252kx                  1/1     Running   0          60m
openshift-ingress-canary                 ingress-canary-2zrqm           1/1     Running   0          59m
openshift-machine-config-operator        machine-config-daemon-8dkgg    2/2     Running   0          60m
openshift-monitoring                     node-exporter-j7w8g            2/2     Running   0          60m
openshift-multus                         multus-fkbvw                   1/1     Running   0          60m
openshift-multus                         network-metrics-daemon-7c47b   2/2     Running   0          60m
openshift-network-diagnostics            network-check-target-hfk5s     1/1     Running   0          60m
openshift-sdn                            ovs-gt8c2                      1/1     Running   0          60m
openshift-sdn                            sdn-9rvjl                      2/2     Running   0          60m

$ oc -n openshift-machine-config-operator logs machine-config-daemon-8dkgg -c machine-config-daemon
I0122 14:08:35.826183    1936 start.go:108] Version: v4.7.0-202101211944.p0-dirty (4be49c8e238eaba6d932acf51a97e071bac90af3)
I0122 14:08:35.910076    1936 start.go:121] Calling chroot("/rootfs")
I0122 14:08:35.910629    1936 rpm-ostree.go:261] Running captured: rpm-ostree status --json
I0122 14:08:36.323339    1936 daemon.go:224] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4 (47.83.202101171239-0)
I0122 14:08:36.431059    1936 daemon.go:231] Installed Ignition binary version: 2.9.0
I0122 14:08:36.504676    1936 start.go:97] Copied self to /run/bin/machine-config-daemon on host
I0122 14:08:36.515280    1936 update.go:1854] Starting to manage node: ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
I0122 14:08:36.516075    1936 metrics.go:105] Registering Prometheus metrics
I0122 14:08:36.518880    1936 metrics.go:110] Starting metrics listener on 127.0.0.1:8797
I0122 14:08:36.527065    1936 rpm-ostree.go:261] Running captured: rpm-ostree status
I0122 14:08:36.594495    1936 daemon.go:863] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)

  ostree://8e87a86b9444784ab29e7917fa82e00d5e356f18b19449946b687ee8dc27c51a
                   Version: 47.83.202101161239-0 (2021-01-16T12:43:01Z)
I0122 14:08:36.594628    1936 rpm-ostree.go:261] Running captured: journalctl --list-boots
I0122 14:08:36.603451    1936 daemon.go:870] journalctl --list-boots:
-1 f9f6740b0163499cab78d8468569f9f8 Fri 2021-01-22 14:02:48 UTC—Fri 2021-01-22 14:07:33 UTC
 0 318cfe198cd54a85897d38aa9515de43 Fri 2021-01-22 14:07:49 UTC—Fri 2021-01-22 14:08:36 UTC
I0122 14:08:36.603579    1936 rpm-ostree.go:261] Running captured: systemctl list-units --state=failed --no-legend
I0122 14:08:36.615843    1936 daemon.go:885] systemd service state: OK
I0122 14:08:36.615877    1936 daemon.go:617] Starting MachineConfigDaemon
I0122 14:08:36.616050    1936 daemon.go:624] Enabling Kubelet Healthz Monitor
I0122 14:08:53.520695    1936 trace.go:205] Trace[646203300]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Jan-2021 14:08:36.516) (total time: 17003ms):
Trace[646203300]: [17.003700262s] [17.003700262s] END
I0122 14:08:53.520722    1936 trace.go:205] Trace[1106410694]: "Reflector ListAndWatch" name:github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101 (22-Jan-2021 14:08:36.518) (total time: 17002ms):
Trace[1106410694]: [17.002569687s] [17.002569687s] END
E0122 14:08:53.520747    1936 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
E0122 14:08:53.520751    1936 reflector.go:138] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
I0122 14:08:55.982759    1936 daemon.go:401] Node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq is not labeled node-role.kubernetes.io/master
I0122 14:08:55.983184    1936 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq: map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci/zones/us-east1-b/instances/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq"} machine.openshift.io/machine:openshift-machine-api/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json
I0122 14:08:55.984477    1936 node.go:45] Setting initial node config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.015412    1936 daemon.go:781] In bootstrap mode
I0122 14:08:56.015452    1936 daemon.go:809] Current+desired config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.022470    1936 daemon.go:1061] No bootstrap pivot required; unlinking bootstrap node annotations
I0122 14:08:56.022568    1936 daemon.go:1099] Validating against pending config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.038446    1936 daemon.go:1110] Validated on-disk state
I0122 14:08:56.082165    1936 daemon.go:1165] Completing pending config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.082205    1936 update.go:1854] completed update for config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.087009    1936 daemon.go:1181] In desired config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:54:13.901273    1936 update.go:598] Checking Reconcilable for config rendered-worker-fa380cb3de6dc7ad55ff61075439843f to rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:54:13.946443    1936 update.go:1854] Starting update from rendered-worker-fa380cb3de6dc7ad55ff61075439843f to rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85: &{osUpdate:false kargs:true fips:false passwd:false files:false units:false kernelType:false extensions:false}
I0122 14:54:13.973966    1936 update.go:1854] Update prepared; beginning drain
E0122 14:54:14.454278    1936 daemon.go:342] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-kck4h, openshift-cluster-node-tuning-operator/tuned-ld95z, openshift-dns/dns-default-brh65, openshift-image-registry/node-ca-252kx, openshift-ingress-canary/ingress-canary-2zrqm, openshift-machine-config-operator/machine-config-daemon-8dkgg, openshift-monitoring/node-exporter-j7w8g, openshift-multus/multus-fkbvw, openshift-multus/network-metrics-daemon-7c47b, openshift-network-diagnostics/network-check-target-hfk5s, openshift-sdn/ovs-gt8c2, openshift-sdn/sdn-9rvjl; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: openshift-marketplace/certified-operators-cths5, openshift-marketplace/community-operators-95289, openshift-marketplace/redhat-marketplace-wcpg2, openshift-marketplace/redhat-operators-qxscf
I0122 14:54:14.461348    1936 daemon.go:342] evicting pod openshift-monitoring/thanos-querier-f7fb47c8c-2bwb4
I0122 14:54:14.461383    1936 daemon.go:342] evicting pod openshift-image-registry/image-registry-b965fdb7c-z8r7f
I0122 14:54:14.461398    1936 daemon.go:342] evicting pod openshift-marketplace/community-operators-95289
I0122 14:54:14.461386    1936 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-1
I0122 14:54:14.461438    1936 daemon.go:342] evicting pod openshift-marketplace/redhat-marketplace-wcpg2
I0122 14:54:14.461449    1936 daemon.go:342] evicting pod openshift-ingress/router-default-57747bb9b-9s2s7
I0122 14:54:14.461470    1936 daemon.go:342] evicting pod openshift-monitoring/prometheus-adapter-778f695847-zq2pd
I0122 14:54:14.461474    1936 daemon.go:342] evicting pod openshift-marketplace/redhat-operators-qxscf
I0122 14:54:14.461545    1936 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-2
I0122 14:54:14.461555    1936 daemon.go:342] evicting pod openshift-monitoring/telemeter-client-7448c96b67-644ts
I0122 14:54:14.461573    1936 daemon.go:342] evicting pod openshift-monitoring/prometheus-k8s-0
I0122 14:54:14.461351    1936 daemon.go:342] evicting pod openshift-marketplace/certified-operators-cths5
I0122 14:54:16.685050    1936 request.go:655] Throttling request took 1.085730314s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-image-registry/pods/image-registry-b965fdb7c-z8r7f
I0122 14:54:22.899267    1936 daemon.go:328] Evicted pod openshift-marketplace/redhat-operators-qxscf
I0122 14:54:23.089462    1936 daemon.go:328] Evicted pod openshift-marketplace/certified-operators-cths5
I0122 14:54:23.291640    1936 daemon.go:328] Evicted pod openshift-monitoring/prometheus-k8s-0
I0122 14:54:23.490207    1936 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-2
I0122 14:54:23.689923    1936 daemon.go:328] Evicted pod openshift-marketplace/redhat-marketplace-wcpg2
I0122 14:54:24.089281    1936 daemon.go:328] Evicted pod openshift-monitoring/telemeter-client-7448c96b67-644ts
I0122 14:54:24.690413    1936 daemon.go:328] Evicted pod openshift-marketplace/community-operators-95289
I0122 14:54:25.090409    1936 daemon.go:328] Evicted pod openshift-monitoring/prometheus-adapter-778f695847-zq2pd
I0122 14:54:33.112425    1936 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-1
I0122 14:54:33.603561    1936 daemon.go:328] Evicted pod openshift-image-registry/image-registry-b965fdb7c-z8r7f
I0122 14:54:33.605285    1936 daemon.go:328] Evicted pod openshift-monitoring/thanos-querier-f7fb47c8c-2bwb4
I0122 14:55:31.715552    1936 daemon.go:328] Evicted pod openshift-ingress/router-default-57747bb9b-9s2s7
I0122 14:55:31.715656    1936 update.go:1854] drain complete
I0122 14:55:31.721368    1936 update.go:237] Successful drain took 77.742286577 seconds
I0122 14:55:31.721411    1936 update.go:1217] Updating files
I0122 14:55:31.735428    1936 update.go:1566] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
I0122 14:55:31.745258    1936 update.go:1566] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
I0122 14:55:31.750130    1936 update.go:1566] Writing file "/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem"
I0122 14:55:31.755183    1936 update.go:1566] Writing file "/usr/local/bin/configure-ovs.sh"
I0122 14:55:31.761489    1936 update.go:1566] Writing file "/etc/containers/storage.conf"
I0122 14:55:31.766776    1936 update.go:1566] Writing file "/etc/NetworkManager/conf.d/hostname.conf"
I0122 14:55:31.771912    1936 update.go:1566] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
I0122 14:55:31.777463    1936 update.go:1566] Writing file "/etc/modules-load.d/iptables.conf"
I0122 14:55:31.781894    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet-ca.crt"
I0122 14:55:31.787362    1936 update.go:1566] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
I0122 14:55:31.792631    1936 update.go:1566] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
I0122 14:55:31.798503    1936 update.go:1566] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
I0122 14:55:31.803606    1936 update.go:1566] Writing file "/var/lib/kubelet/config.json"
I0122 14:55:31.810273    1936 update.go:1566] Writing file "/etc/kubernetes/ca.crt"
I0122 14:55:31.815903    1936 update.go:1566] Writing file "/etc/ssh/sshd_config.d/10-disable-ssh-key-dir.conf"
I0122 14:55:31.821200    1936 update.go:1566] Writing file "/etc/sysctl.d/forward.conf"
I0122 14:55:31.826144    1936 update.go:1566] Writing file "/etc/sysctl.d/inotify.conf"
I0122 14:55:31.831208    1936 update.go:1566] Writing file "/usr/local/sbin/set-valid-hostname.sh"
I0122 14:55:31.837065    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
I0122 14:55:31.841617    1936 update.go:1566] Writing file "/etc/containers/registries.conf"
I0122 14:55:31.846760    1936 update.go:1566] Writing file "/etc/crio/crio.conf.d/00-default"
I0122 14:55:31.853086    1936 update.go:1566] Writing file "/etc/containers/policy.json"
I0122 14:55:31.859090    1936 update.go:1566] Writing file "/etc/kubernetes/cloud.conf"
I0122 14:55:31.864836    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet.conf"
I0122 14:55:31.872457    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.267877    1936 update.go:1461] Preset systemd unit crio.service
I0122 14:55:32.267911    1936 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 14:55:32.282343    1936 update.go:1544] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
)
I0122 14:55:32.282384    1936 update.go:1507] Writing systemd unit "gcp-hostname.service"
I0122 14:55:32.286689    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.289899    1936 update.go:1507] Writing systemd unit "kubelet.service"
I0122 14:55:32.292787    1936 update.go:1507] Writing systemd unit "machine-config-daemon-firstboot.service"
I0122 14:55:32.295922    1936 update.go:1507] Writing systemd unit "machine-config-daemon-pull.service"
I0122 14:55:32.298643    1936 update.go:1507] Writing systemd unit "node-valid-hostname.service"
I0122 14:55:32.301369    1936 update.go:1507] Writing systemd unit "nodeip-configuration.service"
I0122 14:55:32.304064    1936 update.go:1507] Writing systemd unit "ovs-configuration.service"
I0122 14:55:32.306608    1936 update.go:1472] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
I0122 14:55:32.662524    1936 update.go:1461] Preset systemd unit ovs-vswitchd.service
I0122 14:55:32.662561    1936 update.go:1472] Writing systemd unit dropin "10-ovsdb-restart.conf"
I0122 14:55:32.665860    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.678607    1936 update.go:1544] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
)
I0122 14:55:32.678638    1936 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 14:55:32.691595    1936 update.go:1544] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
)
I0122 14:55:33.027548    1936 update.go:1439] Enabled systemd units: [gcp-hostname.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service openvswitch.service ovs-configuration.service ovsdb-server.service]
I0122 14:55:33.371505    1936 update.go:1450] Disabled systemd units [nodeip-configuration.service]
I0122 14:55:33.371549    1936 update.go:1290] Deleting stale data
I0122 14:55:33.389789    1936 update.go:1685] Writing SSHKeys at "/home/core/.ssh/authorized_keys"
I0122 14:55:33.413061    1936 update.go:1854] Running rpm-ostree [kargs --append=bar]
I0122 14:55:33.417405    1936 rpm-ostree.go:261] Running captured: rpm-ostree kargs --append=bar
I0122 14:55:40.670246    1936 update.go:1854] Rebooting node
I0122 14:55:40.674370    1936 update.go:1854] initiating reboot: Node will reboot into config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:55:49.706385    1936 daemon.go:642] Shutting down MachineConfigDaemon
I0122 14:56:56.872473    2178 start.go:108] Version: v4.7.0-202101211944.p0-dirty (4be49c8e238eaba6d932acf51a97e071bac90af3)
I0122 14:56:56.887456    2178 start.go:121] Calling chroot("/rootfs")
I0122 14:56:56.887772    2178 rpm-ostree.go:261] Running captured: rpm-ostree status --json
I0122 14:56:57.550614    2178 daemon.go:224] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4 (47.83.202101171239-0)
I0122 14:56:57.730974    2178 daemon.go:231] Installed Ignition binary version: 2.9.0
I0122 14:56:57.814211    2178 start.go:97] Copied self to /run/bin/machine-config-daemon on host
I0122 14:56:57.818966    2178 metrics.go:105] Registering Prometheus metrics
I0122 14:56:57.819075    2178 metrics.go:110] Starting metrics listener on 127.0.0.1:8797
I0122 14:56:57.820766    2178 update.go:1854] Starting to manage node: ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
I0122 14:56:57.828556    2178 rpm-ostree.go:261] Running captured: rpm-ostree status
I0122 14:56:57.893172    2178 daemon.go:863] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)

  pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)
I0122 14:56:57.893227    2178 rpm-ostree.go:261] Running captured: journalctl --list-boots
I0122 14:56:57.906317    2178 daemon.go:870] journalctl --list-boots:
-2 f9f6740b0163499cab78d8468569f9f8 Fri 2021-01-22 14:02:48 UTC—Fri 2021-01-22 14:07:33 UTC
-1 318cfe198cd54a85897d38aa9515de43 Fri 2021-01-22 14:07:49 UTC—Fri 2021-01-22 14:55:49 UTC
 0 369d853b93ba421ab14f959f5e0e8e6e Fri 2021-01-22 14:56:06 UTC—Fri 2021-01-22 14:56:57 UTC
I0122 14:56:57.906458    2178 rpm-ostree.go:261] Running captured: systemctl list-units --state=failed --no-legend
I0122 14:56:57.920522    2178 daemon.go:885] systemd service state: OK
I0122 14:56:57.920553    2178 daemon.go:617] Starting MachineConfigDaemon
I0122 14:56:57.920751    2178 daemon.go:624] Enabling Kubelet Healthz Monitor
E0122 14:57:00.814176    2178 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
E0122 14:57:00.814291    2178 reflector.go:138] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
I0122 14:57:02.875600    2178 daemon.go:401] Node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq is not labeled node-role.kubernetes.io/master
I0122 14:57:02.888123    2178 daemon.go:816] Current config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:57:02.888160    2178 daemon.go:817] Desired config: rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.901587    2178 update.go:1854] Disk currentConfig rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85 overrides node's currentConfig annotation rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:57:02.909461    2178 daemon.go:1099] Validating against pending config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.934731    2178 daemon.go:1110] Validated on-disk state
I0122 14:57:02.960527    2178 daemon.go:1165] Completing pending config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.980343    2178 update.go:1854] completed update for config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.987069    2178 daemon.go:1181] In desired config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85

$ cat << EOF > ext-no-karg.yaml
> apiVersion: machineconfiguration.openshift.io/v1
> kind: MachineConfig
> metadata:
>   labels:
>     machineconfiguration.openshift.io/role: worker
>   name: master-extension1
> spec:
>   config:
>     ignition:
>       version: 3.0.0
>   extensions:
>     - usbguard
> EOF

$ oc create -f ext-no-karg.yaml 
machineconfig.machineconfiguration.openshift.io/master-extension1 created

$ oc get mc
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
00-worker                                          4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
01-master-container-runtime                        4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
01-master-kubelet                                  4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
01-worker-container-runtime                        4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
01-worker-kubelet                                  4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
99-master-generated-registries                     4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
99-master-ssh                                                                                 3.1.0             83m
99-worker-generated-registries                     4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
99-worker-ssh                                                                                 3.1.0             83m
master-extension1                                                                             3.0.0             3s
rendered-master-e2e1a48726d3c0d40b0fc4f232a0fb8c   4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85   4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             29m
rendered-worker-fa380cb3de6dc7ad55ff61075439843f   4be49c8e238eaba6d932acf51a97e071bac90af3   3.2.0             76m
worker-kargs                                                                                  3.0.0             29m

$ oc get mcp/worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85   False     True       False      3              0                   0                     0                      77m

$ watch oc get mcp/worker

$ oc get pods -A --field-selector spec.nodeName=ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
NAMESPACE                                NAME                           READY   STATUS    RESTARTS   AGE
openshift-cluster-csi-drivers            gcp-pd-csi-driver-node-kck4h   3/3     Running   0          87m
openshift-cluster-node-tuning-operator   tuned-ld95z                    1/1     Running   0          87m
openshift-dns                            dns-default-brh65              3/3     Running   0          87m
openshift-image-registry                 node-ca-252kx                  1/1     Running   0          87m
openshift-ingress-canary                 ingress-canary-2zrqm           1/1     Running   0          87m
openshift-machine-config-operator        machine-config-daemon-8dkgg    2/2     Running   0          87m
openshift-monitoring                     node-exporter-j7w8g            2/2     Running   0          87m
openshift-multus                         multus-fkbvw                   1/1     Running   0          87m
openshift-multus                         network-metrics-daemon-7c47b   2/2     Running   0          87m
openshift-network-diagnostics            network-check-target-hfk5s     1/1     Running   0          87m
openshift-sdn                            ovs-gt8c2                      1/1     Running   0          87m
openshift-sdn                            sdn-9rvjl                      2/2     Running   0          87m

$ oc debug node/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
Starting pod/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# cat /proc/cmdline 
BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-39f4649a787b9c2f81dae7a8b2ec312d5d47c9548f7cf26a77a6a5e1ab72fc3c/vmlinuz-4.18.0-240.10.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.0/rhcos/39f4649a787b9c2f81dae7a8b2ec312d5d47c9548f7cf26a77a6a5e1ab72fc3c/0 ignition.platform.id=gcp root=UUID=c919e011-8f73-44ad-ba0c-652fb2ded11a rw rootflags=prjquota bar
sh-4.4# exit
exit
sh-4.2# exit
exit

Removing debug pod ...
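The manual check above reads `/proc/cmdline` to confirm that `bar` is still present after the extension-only update. Since `rpm-ostree kargs --append` adds an argument even if it already exists, a reapplied karg could appear twice; a small sketch of that check (the helper `karg_count` is hypothetical, and the cmdline string is abbreviated from the transcript):

```python
# Hypothetical helper mirroring the manual /proc/cmdline check: verify a
# kernel argument is present, and present exactly once (a needlessly
# reapplied --append could duplicate it).

def karg_count(cmdline: str, karg: str) -> int:
    """Count occurrences of `karg` among the space-separated tokens."""
    return cmdline.split().count(karg)

# Abbreviated version of the cmdline captured in the debug session above.
cmdline = ("BOOT_IMAGE=(hd0,gpt3)/.../vmlinuz console=tty0 "
           "root=UUID=c919e011-8f73-44ad-ba0c-652fb2ded11a rw rootflags=prjquota bar")

assert karg_count(cmdline, "bar") == 1  # applied exactly once, not duplicated
```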

$ oc -n openshift-machine-config-operator logs machine-config-daemon-8dkgg -c machine-config-daemon
I0122 14:08:35.826183    1936 start.go:108] Version: v4.7.0-202101211944.p0-dirty (4be49c8e238eaba6d932acf51a97e071bac90af3)
I0122 14:08:35.910076    1936 start.go:121] Calling chroot("/rootfs")
I0122 14:08:35.910629    1936 rpm-ostree.go:261] Running captured: rpm-ostree status --json
I0122 14:08:36.323339    1936 daemon.go:224] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4 (47.83.202101171239-0)
I0122 14:08:36.431059    1936 daemon.go:231] Installed Ignition binary version: 2.9.0
I0122 14:08:36.504676    1936 start.go:97] Copied self to /run/bin/machine-config-daemon on host
I0122 14:08:36.515280    1936 update.go:1854] Starting to manage node: ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
I0122 14:08:36.516075    1936 metrics.go:105] Registering Prometheus metrics
I0122 14:08:36.518880    1936 metrics.go:110] Starting metrics listener on 127.0.0.1:8797
I0122 14:08:36.527065    1936 rpm-ostree.go:261] Running captured: rpm-ostree status
I0122 14:08:36.594495    1936 daemon.go:863] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)

  ostree://8e87a86b9444784ab29e7917fa82e00d5e356f18b19449946b687ee8dc27c51a
                   Version: 47.83.202101161239-0 (2021-01-16T12:43:01Z)
I0122 14:08:36.594628    1936 rpm-ostree.go:261] Running captured: journalctl --list-boots
I0122 14:08:36.603451    1936 daemon.go:870] journalctl --list-boots:
-1 f9f6740b0163499cab78d8468569f9f8 Fri 2021-01-22 14:02:48 UTC—Fri 2021-01-22 14:07:33 UTC
 0 318cfe198cd54a85897d38aa9515de43 Fri 2021-01-22 14:07:49 UTC—Fri 2021-01-22 14:08:36 UTC
I0122 14:08:36.603579    1936 rpm-ostree.go:261] Running captured: systemctl list-units --state=failed --no-legend
I0122 14:08:36.615843    1936 daemon.go:885] systemd service state: OK
I0122 14:08:36.615877    1936 daemon.go:617] Starting MachineConfigDaemon
I0122 14:08:36.616050    1936 daemon.go:624] Enabling Kubelet Healthz Monitor
I0122 14:08:53.520695    1936 trace.go:205] Trace[646203300]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (22-Jan-2021 14:08:36.516) (total time: 17003ms):
Trace[646203300]: [17.003700262s] [17.003700262s] END
I0122 14:08:53.520722    1936 trace.go:205] Trace[1106410694]: "Reflector ListAndWatch" name:github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101 (22-Jan-2021 14:08:36.518) (total time: 17002ms):
Trace[1106410694]: [17.002569687s] [17.002569687s] END
E0122 14:08:53.520747    1936 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
E0122 14:08:53.520751    1936 reflector.go:138] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
I0122 14:08:55.982759    1936 daemon.go:401] Node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq is not labeled node-role.kubernetes.io/master
I0122 14:08:55.983184    1936 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq: map[csi.volume.kubernetes.io/nodeid:{"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci/zones/us-east1-b/instances/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq"} machine.openshift.io/machine:openshift-machine-api/ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json
I0122 14:08:55.984477    1936 node.go:45] Setting initial node config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.015412    1936 daemon.go:781] In bootstrap mode
I0122 14:08:56.015452    1936 daemon.go:809] Current+desired config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.022470    1936 daemon.go:1061] No bootstrap pivot required; unlinking bootstrap node annotations
I0122 14:08:56.022568    1936 daemon.go:1099] Validating against pending config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.038446    1936 daemon.go:1110] Validated on-disk state
I0122 14:08:56.082165    1936 daemon.go:1165] Completing pending config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.082205    1936 update.go:1854] completed update for config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:08:56.087009    1936 daemon.go:1181] In desired config rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:54:13.901273    1936 update.go:598] Checking Reconcilable for config rendered-worker-fa380cb3de6dc7ad55ff61075439843f to rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:54:13.946443    1936 update.go:1854] Starting update from rendered-worker-fa380cb3de6dc7ad55ff61075439843f to rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85: &{osUpdate:false kargs:true fips:false passwd:false files:false units:false kernelType:false extensions:false}
I0122 14:54:13.973966    1936 update.go:1854] Update prepared; beginning drain
E0122 14:54:14.454278    1936 daemon.go:342] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-kck4h, openshift-cluster-node-tuning-operator/tuned-ld95z, openshift-dns/dns-default-brh65, openshift-image-registry/node-ca-252kx, openshift-ingress-canary/ingress-canary-2zrqm, openshift-machine-config-operator/machine-config-daemon-8dkgg, openshift-monitoring/node-exporter-j7w8g, openshift-multus/multus-fkbvw, openshift-multus/network-metrics-daemon-7c47b, openshift-network-diagnostics/network-check-target-hfk5s, openshift-sdn/ovs-gt8c2, openshift-sdn/sdn-9rvjl; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: openshift-marketplace/certified-operators-cths5, openshift-marketplace/community-operators-95289, openshift-marketplace/redhat-marketplace-wcpg2, openshift-marketplace/redhat-operators-qxscf
I0122 14:54:14.461348    1936 daemon.go:342] evicting pod openshift-monitoring/thanos-querier-f7fb47c8c-2bwb4
I0122 14:54:14.461383    1936 daemon.go:342] evicting pod openshift-image-registry/image-registry-b965fdb7c-z8r7f
I0122 14:54:14.461398    1936 daemon.go:342] evicting pod openshift-marketplace/community-operators-95289
I0122 14:54:14.461386    1936 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-1
I0122 14:54:14.461438    1936 daemon.go:342] evicting pod openshift-marketplace/redhat-marketplace-wcpg2
I0122 14:54:14.461449    1936 daemon.go:342] evicting pod openshift-ingress/router-default-57747bb9b-9s2s7
I0122 14:54:14.461470    1936 daemon.go:342] evicting pod openshift-monitoring/prometheus-adapter-778f695847-zq2pd
I0122 14:54:14.461474    1936 daemon.go:342] evicting pod openshift-marketplace/redhat-operators-qxscf
I0122 14:54:14.461545    1936 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-2
I0122 14:54:14.461555    1936 daemon.go:342] evicting pod openshift-monitoring/telemeter-client-7448c96b67-644ts
I0122 14:54:14.461573    1936 daemon.go:342] evicting pod openshift-monitoring/prometheus-k8s-0
I0122 14:54:14.461351    1936 daemon.go:342] evicting pod openshift-marketplace/certified-operators-cths5
I0122 14:54:16.685050    1936 request.go:655] Throttling request took 1.085730314s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-image-registry/pods/image-registry-b965fdb7c-z8r7f
I0122 14:54:22.899267    1936 daemon.go:328] Evicted pod openshift-marketplace/redhat-operators-qxscf
I0122 14:54:23.089462    1936 daemon.go:328] Evicted pod openshift-marketplace/certified-operators-cths5
I0122 14:54:23.291640    1936 daemon.go:328] Evicted pod openshift-monitoring/prometheus-k8s-0
I0122 14:54:23.490207    1936 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-2
I0122 14:54:23.689923    1936 daemon.go:328] Evicted pod openshift-marketplace/redhat-marketplace-wcpg2
I0122 14:54:24.089281    1936 daemon.go:328] Evicted pod openshift-monitoring/telemeter-client-7448c96b67-644ts
I0122 14:54:24.690413    1936 daemon.go:328] Evicted pod openshift-marketplace/community-operators-95289
I0122 14:54:25.090409    1936 daemon.go:328] Evicted pod openshift-monitoring/prometheus-adapter-778f695847-zq2pd
I0122 14:54:33.112425    1936 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-1
I0122 14:54:33.603561    1936 daemon.go:328] Evicted pod openshift-image-registry/image-registry-b965fdb7c-z8r7f
I0122 14:54:33.605285    1936 daemon.go:328] Evicted pod openshift-monitoring/thanos-querier-f7fb47c8c-2bwb4
I0122 14:55:31.715552    1936 daemon.go:328] Evicted pod openshift-ingress/router-default-57747bb9b-9s2s7
I0122 14:55:31.715656    1936 update.go:1854] drain complete
I0122 14:55:31.721368    1936 update.go:237] Successful drain took 77.742286577 seconds
I0122 14:55:31.721411    1936 update.go:1217] Updating files
I0122 14:55:31.735428    1936 update.go:1566] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
I0122 14:55:31.745258    1936 update.go:1566] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
I0122 14:55:31.750130    1936 update.go:1566] Writing file "/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem"
I0122 14:55:31.755183    1936 update.go:1566] Writing file "/usr/local/bin/configure-ovs.sh"
I0122 14:55:31.761489    1936 update.go:1566] Writing file "/etc/containers/storage.conf"
I0122 14:55:31.766776    1936 update.go:1566] Writing file "/etc/NetworkManager/conf.d/hostname.conf"
I0122 14:55:31.771912    1936 update.go:1566] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
I0122 14:55:31.777463    1936 update.go:1566] Writing file "/etc/modules-load.d/iptables.conf"
I0122 14:55:31.781894    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet-ca.crt"
I0122 14:55:31.787362    1936 update.go:1566] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
I0122 14:55:31.792631    1936 update.go:1566] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
I0122 14:55:31.798503    1936 update.go:1566] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
I0122 14:55:31.803606    1936 update.go:1566] Writing file "/var/lib/kubelet/config.json"
I0122 14:55:31.810273    1936 update.go:1566] Writing file "/etc/kubernetes/ca.crt"
I0122 14:55:31.815903    1936 update.go:1566] Writing file "/etc/ssh/sshd_config.d/10-disable-ssh-key-dir.conf"
I0122 14:55:31.821200    1936 update.go:1566] Writing file "/etc/sysctl.d/forward.conf"
I0122 14:55:31.826144    1936 update.go:1566] Writing file "/etc/sysctl.d/inotify.conf"
I0122 14:55:31.831208    1936 update.go:1566] Writing file "/usr/local/sbin/set-valid-hostname.sh"
I0122 14:55:31.837065    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
I0122 14:55:31.841617    1936 update.go:1566] Writing file "/etc/containers/registries.conf"
I0122 14:55:31.846760    1936 update.go:1566] Writing file "/etc/crio/crio.conf.d/00-default"
I0122 14:55:31.853086    1936 update.go:1566] Writing file "/etc/containers/policy.json"
I0122 14:55:31.859090    1936 update.go:1566] Writing file "/etc/kubernetes/cloud.conf"
I0122 14:55:31.864836    1936 update.go:1566] Writing file "/etc/kubernetes/kubelet.conf"
I0122 14:55:31.872457    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.267877    1936 update.go:1461] Preset systemd unit crio.service
I0122 14:55:32.267911    1936 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 14:55:32.282343    1936 update.go:1544] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
)
I0122 14:55:32.282384    1936 update.go:1507] Writing systemd unit "gcp-hostname.service"
I0122 14:55:32.286689    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.289899    1936 update.go:1507] Writing systemd unit "kubelet.service"
I0122 14:55:32.292787    1936 update.go:1507] Writing systemd unit "machine-config-daemon-firstboot.service"
I0122 14:55:32.295922    1936 update.go:1507] Writing systemd unit "machine-config-daemon-pull.service"
I0122 14:55:32.298643    1936 update.go:1507] Writing systemd unit "node-valid-hostname.service"
I0122 14:55:32.301369    1936 update.go:1507] Writing systemd unit "nodeip-configuration.service"
I0122 14:55:32.304064    1936 update.go:1507] Writing systemd unit "ovs-configuration.service"
I0122 14:55:32.306608    1936 update.go:1472] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
I0122 14:55:32.662524    1936 update.go:1461] Preset systemd unit ovs-vswitchd.service
I0122 14:55:32.662561    1936 update.go:1472] Writing systemd unit dropin "10-ovsdb-restart.conf"
I0122 14:55:32.665860    1936 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 14:55:32.678607    1936 update.go:1544] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
)
I0122 14:55:32.678638    1936 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 14:55:32.691595    1936 update.go:1544] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
)
I0122 14:55:33.027548    1936 update.go:1439] Enabled systemd units: [gcp-hostname.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service openvswitch.service ovs-configuration.service ovsdb-server.service]
I0122 14:55:33.371505    1936 update.go:1450] Disabled systemd units [nodeip-configuration.service]
I0122 14:55:33.371549    1936 update.go:1290] Deleting stale data
I0122 14:55:33.389789    1936 update.go:1685] Writing SSHKeys at "/home/core/.ssh/authorized_keys"
I0122 14:55:33.413061    1936 update.go:1854] Running rpm-ostree [kargs --append=bar]
I0122 14:55:33.417405    1936 rpm-ostree.go:261] Running captured: rpm-ostree kargs --append=bar
I0122 14:55:40.670246    1936 update.go:1854] Rebooting node
I0122 14:55:40.674370    1936 update.go:1854] initiating reboot: Node will reboot into config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:55:49.706385    1936 daemon.go:642] Shutting down MachineConfigDaemon
I0122 14:56:56.872473    2178 start.go:108] Version: v4.7.0-202101211944.p0-dirty (4be49c8e238eaba6d932acf51a97e071bac90af3)
I0122 14:56:56.887456    2178 start.go:121] Calling chroot("/rootfs")
I0122 14:56:56.887772    2178 rpm-ostree.go:261] Running captured: rpm-ostree status --json
I0122 14:56:57.550614    2178 daemon.go:224] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4 (47.83.202101171239-0)
I0122 14:56:57.730974    2178 daemon.go:231] Installed Ignition binary version: 2.9.0
I0122 14:56:57.814211    2178 start.go:97] Copied self to /run/bin/machine-config-daemon on host
I0122 14:56:57.818966    2178 metrics.go:105] Registering Prometheus metrics
I0122 14:56:57.819075    2178 metrics.go:110] Starting metrics listener on 127.0.0.1:8797
I0122 14:56:57.820766    2178 update.go:1854] Starting to manage node: ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
I0122 14:56:57.828556    2178 rpm-ostree.go:261] Running captured: rpm-ostree status
I0122 14:56:57.893172    2178 daemon.go:863] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)

  pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)
I0122 14:56:57.893227    2178 rpm-ostree.go:261] Running captured: journalctl --list-boots
I0122 14:56:57.906317    2178 daemon.go:870] journalctl --list-boots:
-2 f9f6740b0163499cab78d8468569f9f8 Fri 2021-01-22 14:02:48 UTC—Fri 2021-01-22 14:07:33 UTC
-1 318cfe198cd54a85897d38aa9515de43 Fri 2021-01-22 14:07:49 UTC—Fri 2021-01-22 14:55:49 UTC
 0 369d853b93ba421ab14f959f5e0e8e6e Fri 2021-01-22 14:56:06 UTC—Fri 2021-01-22 14:56:57 UTC
I0122 14:56:57.906458    2178 rpm-ostree.go:261] Running captured: systemctl list-units --state=failed --no-legend
I0122 14:56:57.920522    2178 daemon.go:885] systemd service state: OK
I0122 14:56:57.920553    2178 daemon.go:617] Starting MachineConfigDaemon
I0122 14:56:57.920751    2178 daemon.go:624] Enabling Kubelet Healthz Monitor
E0122 14:57:00.814176    2178 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
E0122 14:57:00.814291    2178 reflector.go:138] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
I0122 14:57:02.875600    2178 daemon.go:401] Node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq is not labeled node-role.kubernetes.io/master
I0122 14:57:02.888123    2178 daemon.go:816] Current config: rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:57:02.888160    2178 daemon.go:817] Desired config: rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.901587    2178 update.go:1854] Disk currentConfig rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85 overrides node's currentConfig annotation rendered-worker-fa380cb3de6dc7ad55ff61075439843f
I0122 14:57:02.909461    2178 daemon.go:1099] Validating against pending config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.934731    2178 daemon.go:1110] Validated on-disk state
I0122 14:57:02.960527    2178 daemon.go:1165] Completing pending config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.980343    2178 update.go:1854] completed update for config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 14:57:02.987069    2178 daemon.go:1181] In desired config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 15:24:55.133154    2178 update.go:598] Checking Reconcilable for config rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85 to rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:24:55.170528    2178 update.go:1854] Starting update from rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85 to rendered-worker-c0757a3a402ea1842120d8a1ff5ee859: &{osUpdate:false kargs:false fips:false passwd:false files:false units:false kernelType:false extensions:true}
I0122 15:24:55.192746    2178 update.go:1854] Update prepared; beginning drain
E0122 15:24:55.680031    2178 daemon.go:342] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-kck4h, openshift-cluster-node-tuning-operator/tuned-ld95z, openshift-dns/dns-default-brh65, openshift-image-registry/node-ca-252kx, openshift-ingress-canary/ingress-canary-2zrqm, openshift-machine-config-operator/machine-config-daemon-8dkgg, openshift-monitoring/node-exporter-j7w8g, openshift-multus/multus-fkbvw, openshift-multus/network-metrics-daemon-7c47b, openshift-network-diagnostics/network-check-target-hfk5s, openshift-sdn/ovs-gt8c2, openshift-sdn/sdn-9rvjl; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: openshift-marketplace/certified-operators-kjmbh, openshift-marketplace/community-operators-9d7w7, openshift-marketplace/redhat-marketplace-9bq9s, openshift-marketplace/redhat-operators-d2h2m
I0122 15:24:55.688004    2178 daemon.go:342] evicting pod openshift-monitoring/thanos-querier-f7fb47c8c-7x8gh
I0122 15:24:55.688030    2178 daemon.go:342] evicting pod openshift-monitoring/prometheus-adapter-778f695847-dd945
I0122 15:24:55.688045    2178 daemon.go:342] evicting pod openshift-monitoring/prometheus-k8s-0
I0122 15:24:55.688050    2178 daemon.go:342] evicting pod openshift-ingress/router-default-57747bb9b-vrpsw
I0122 15:24:55.688033    2178 daemon.go:342] evicting pod openshift-marketplace/redhat-operators-d2h2m
I0122 15:24:55.688102    2178 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-1
I0122 15:24:55.688105    2178 daemon.go:342] evicting pod openshift-monitoring/alertmanager-main-2
I0122 15:24:55.688126    2178 daemon.go:342] evicting pod openshift-marketplace/community-operators-9d7w7
I0122 15:24:55.688126    2178 daemon.go:342] evicting pod openshift-monitoring/telemeter-client-7448c96b67-699bq
I0122 15:24:55.688146    2178 daemon.go:342] evicting pod openshift-marketplace/certified-operators-kjmbh
I0122 15:24:55.688148    2178 daemon.go:342] evicting pod openshift-marketplace/redhat-marketplace-9bq9s
I0122 15:24:55.688004    2178 daemon.go:342] evicting pod openshift-image-registry/image-registry-b965fdb7c-dnwh6
I0122 15:24:57.919870    2178 request.go:655] Throttling request took 1.129869427s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kjmbh
I0122 15:25:01.126231    2178 daemon.go:328] Evicted pod openshift-monitoring/prometheus-adapter-778f695847-dd945
I0122 15:25:01.523337    2178 daemon.go:328] Evicted pod openshift-monitoring/telemeter-client-7448c96b67-699bq
I0122 15:25:02.525091    2178 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-1
I0122 15:25:02.723538    2178 daemon.go:328] Evicted pod openshift-marketplace/certified-operators-kjmbh
I0122 15:25:04.331157    2178 daemon.go:328] Evicted pod openshift-monitoring/prometheus-k8s-0
I0122 15:25:04.524105    2178 daemon.go:328] Evicted pod openshift-monitoring/thanos-querier-f7fb47c8c-7x8gh
I0122 15:25:04.724988    2178 daemon.go:328] Evicted pod openshift-marketplace/redhat-marketplace-9bq9s
I0122 15:25:04.923778    2178 daemon.go:328] Evicted pod openshift-marketplace/community-operators-9d7w7
I0122 15:25:05.327553    2178 daemon.go:328] Evicted pod openshift-monitoring/alertmanager-main-2
I0122 15:25:05.523065    2178 daemon.go:328] Evicted pod openshift-marketplace/redhat-operators-d2h2m
I0122 15:25:05.724903    2178 daemon.go:328] Evicted pod openshift-image-registry/image-registry-b965fdb7c-dnwh6
I0122 15:26:14.937914    2178 daemon.go:328] Evicted pod openshift-ingress/router-default-57747bb9b-vrpsw
I0122 15:26:14.937967    2178 update.go:1854] drain complete
I0122 15:26:14.942292    2178 update.go:237] Successful drain took 79.745043732 seconds
I0122 15:26:14.942342    2178 update.go:1217] Updating files
I0122 15:26:14.959860    2178 update.go:1566] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
I0122 15:26:14.962355    2178 update.go:1566] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
I0122 15:26:14.965666    2178 update.go:1566] Writing file "/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem"
I0122 15:26:14.967805    2178 update.go:1566] Writing file "/usr/local/bin/configure-ovs.sh"
I0122 15:26:14.971278    2178 update.go:1566] Writing file "/etc/containers/storage.conf"
I0122 15:26:14.974094    2178 update.go:1566] Writing file "/etc/NetworkManager/conf.d/hostname.conf"
I0122 15:26:14.976861    2178 update.go:1566] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
I0122 15:26:14.979166    2178 update.go:1566] Writing file "/etc/modules-load.d/iptables.conf"
I0122 15:26:14.982323    2178 update.go:1566] Writing file "/etc/kubernetes/kubelet-ca.crt"
I0122 15:26:14.985028    2178 update.go:1566] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
I0122 15:26:14.987670    2178 update.go:1566] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
I0122 15:26:14.991365    2178 update.go:1566] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
I0122 15:26:14.994329    2178 update.go:1566] Writing file "/var/lib/kubelet/config.json"
I0122 15:26:14.997156    2178 update.go:1566] Writing file "/etc/kubernetes/ca.crt"
I0122 15:26:15.000465    2178 update.go:1566] Writing file "/etc/ssh/sshd_config.d/10-disable-ssh-key-dir.conf"
I0122 15:26:15.003431    2178 update.go:1566] Writing file "/etc/sysctl.d/forward.conf"
I0122 15:26:15.006530    2178 update.go:1566] Writing file "/etc/sysctl.d/inotify.conf"
I0122 15:26:15.009449    2178 update.go:1566] Writing file "/usr/local/sbin/set-valid-hostname.sh"
I0122 15:26:15.012599    2178 update.go:1566] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
I0122 15:26:15.014581    2178 update.go:1566] Writing file "/etc/containers/registries.conf"
I0122 15:26:15.017111    2178 update.go:1566] Writing file "/etc/crio/crio.conf.d/00-default"
I0122 15:26:15.020533    2178 update.go:1566] Writing file "/etc/containers/policy.json"
I0122 15:26:15.023125    2178 update.go:1566] Writing file "/etc/kubernetes/cloud.conf"
I0122 15:26:15.025677    2178 update.go:1566] Writing file "/etc/kubernetes/kubelet.conf"
I0122 15:26:15.028491    2178 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 15:26:15.385202    2178 update.go:1461] Preset systemd unit crio.service
I0122 15:26:15.385240    2178 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 15:26:15.398844    2178 update.go:1544] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
)
I0122 15:26:15.398880    2178 update.go:1507] Writing systemd unit "gcp-hostname.service"
I0122 15:26:15.402093    2178 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 15:26:15.404939    2178 update.go:1507] Writing systemd unit "kubelet.service"
I0122 15:26:15.407620    2178 update.go:1507] Writing systemd unit "machine-config-daemon-firstboot.service"
I0122 15:26:15.410095    2178 update.go:1507] Writing systemd unit "machine-config-daemon-pull.service"
I0122 15:26:15.412809    2178 update.go:1507] Writing systemd unit "node-valid-hostname.service"
I0122 15:26:15.415384    2178 update.go:1507] Writing systemd unit "nodeip-configuration.service"
I0122 15:26:15.417981    2178 update.go:1507] Writing systemd unit "ovs-configuration.service"
I0122 15:26:15.421322    2178 update.go:1472] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
I0122 15:26:15.792804    2178 update.go:1461] Preset systemd unit ovs-vswitchd.service
I0122 15:26:15.792837    2178 update.go:1472] Writing systemd unit dropin "10-ovsdb-restart.conf"
I0122 15:26:15.796006    2178 update.go:1472] Writing systemd unit dropin "10-mco-default-env.conf"
I0122 15:26:15.808799    2178 update.go:1544] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
)
I0122 15:26:15.808835    2178 update.go:1472] Writing systemd unit dropin "mco-disabled.conf"
I0122 15:26:15.821837    2178 update.go:1544] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
)
I0122 15:26:16.156384    2178 update.go:1439] Enabled systemd units: [gcp-hostname.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service openvswitch.service ovs-configuration.service ovsdb-server.service]
I0122 15:26:16.477167    2178 update.go:1450] Disabled systemd units [nodeip-configuration.service]
I0122 15:26:16.477208    2178 update.go:1290] Deleting stale data
I0122 15:26:16.491636    2178 update.go:1685] Writing SSHKeys at "/home/core/.ssh/authorized_keys"
I0122 15:26:16.517358    2178 run.go:18] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-737151572 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
I0122 15:26:32.775803    2178 update.go:1139] Applying extensions : ["update" "--install" "usbguard"]
I0122 15:26:32.775856    2178 rpm-ostree.go:261] Running captured: rpm-ostree update --install usbguard
I0122 15:26:47.631654    2178 update.go:1854] Rebooting node
I0122 15:26:47.635953    2178 update.go:1854] initiating reboot: Node will reboot into config rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:26:56.299583    2178 daemon.go:642] Shutting down MachineConfigDaemon
I0122 15:28:00.324138    2173 start.go:108] Version: v4.7.0-202101211944.p0-dirty (4be49c8e238eaba6d932acf51a97e071bac90af3)
I0122 15:28:00.337322    2173 start.go:121] Calling chroot("/rootfs")
I0122 15:28:00.337831    2173 rpm-ostree.go:261] Running captured: rpm-ostree status --json
I0122 15:28:00.824647    2173 daemon.go:224] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4 (47.83.202101171239-0)
I0122 15:28:00.952187    2173 daemon.go:231] Installed Ignition binary version: 2.9.0
I0122 15:28:01.044033    2173 start.go:97] Copied self to /run/bin/machine-config-daemon on host
I0122 15:28:01.049271    2173 metrics.go:105] Registering Prometheus metrics
I0122 15:28:01.049585    2173 metrics.go:110] Starting metrics listener on 127.0.0.1:8797
I0122 15:28:01.052285    2173 update.go:1854] Starting to manage node: ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq
I0122 15:28:01.059559    2173 rpm-ostree.go:261] Running captured: rpm-ostree status
I0122 15:28:01.130479    2173 daemon.go:863] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)
           LayeredPackages: usbguard

  pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8672a3ce64788e4138bec0d2cafe701d96b55c35482314d393b52dd414e635a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.83.202101171239-0 (2021-01-17T12:42:48Z)
I0122 15:28:01.130517    2173 rpm-ostree.go:261] Running captured: journalctl --list-boots
I0122 15:28:01.140240    2173 daemon.go:870] journalctl --list-boots:
-3 f9f6740b0163499cab78d8468569f9f8 Fri 2021-01-22 14:02:48 UTC—Fri 2021-01-22 14:07:33 UTC
-2 318cfe198cd54a85897d38aa9515de43 Fri 2021-01-22 14:07:49 UTC—Fri 2021-01-22 14:55:49 UTC
-1 369d853b93ba421ab14f959f5e0e8e6e Fri 2021-01-22 14:56:06 UTC—Fri 2021-01-22 15:26:56 UTC
 0 43642ce2b92c4d56a98a0c51544fcb07 Fri 2021-01-22 15:27:12 UTC—Fri 2021-01-22 15:28:01 UTC
I0122 15:28:01.140283    2173 rpm-ostree.go:261] Running captured: systemctl list-units --state=failed --no-legend
I0122 15:28:01.150829    2173 daemon.go:885] systemd service state: OK
I0122 15:28:01.150871    2173 daemon.go:617] Starting MachineConfigDaemon
I0122 15:28:01.151019    2173 daemon.go:624] Enabling Kubelet Healthz Monitor
E0122 15:28:04.736922    2173 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
E0122 15:28:04.737235    2173 reflector.go:138] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 172.30.0.1:443: connect: no route to host
I0122 15:28:06.797994    2173 daemon.go:401] Node ci-ln-54y0bd2-f76d1-lffmr-worker-b-lhmgq is not labeled node-role.kubernetes.io/master
I0122 15:28:06.810712    2173 daemon.go:816] Current config: rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 15:28:06.810748    2173 daemon.go:817] Desired config: rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:28:06.821744    2173 update.go:1854] Disk currentConfig rendered-worker-c0757a3a402ea1842120d8a1ff5ee859 overrides node's currentConfig annotation rendered-worker-2bbd67a2dcf1de1b6e90d8bfe0f92a85
I0122 15:28:06.827599    2173 daemon.go:1099] Validating against pending config rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:28:06.847162    2173 daemon.go:1110] Validated on-disk state
I0122 15:28:06.866960    2173 daemon.go:1165] Completing pending config rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:28:06.891047    2173 update.go:1854] completed update for config rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
I0122 15:28:06.895268    2173 daemon.go:1181] In desired config rendered-worker-c0757a3a402ea1842120d8a1ff5ee859
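The verification hinges on the `Starting update` lines above: the first update reports `kargs:true` (the "bar" karg was added, so `rpm-ostree kargs --append=bar` runs), while the second, extensions-only update reports `kargs:false` and no rpm-ostree kargs call is made. A minimal sketch of that comparison, assuming an order-sensitive list comparison over the rendered configs' kernel arguments (the function name `kargsChanged` here is illustrative, not the actual MCO identifier):

```go
package main

import "fmt"

// kargsChanged reports whether the kernel arguments differ between the
// old and new rendered MachineConfigs. Order is treated as significant
// because the arguments are ultimately handed to
// `rpm-ostree kargs --append=...` in sequence.
func kargsChanged(oldKargs, newKargs []string) bool {
	if len(oldKargs) != len(newKargs) {
		return true
	}
	for i := range oldKargs {
		if oldKargs[i] != newKargs[i] {
			return true
		}
	}
	return false
}

func main() {
	// First update: "bar" added, so the diff reports kargs:true.
	fmt.Println(kargsChanged(nil, []string{"bar"}))
	// Second update (extension only): kargs are identical, so the diff
	// reports kargs:false and no reapply is triggered.
	fmt.Println(kargsChanged([]string{"bar"}, []string{"bar"}))
}
```

With the fix verified here, only a genuine difference in this list should flip the `kargs` field of the update diff to `true`.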

Comment 5 errata-xmlrpc 2021-02-24 15:55:11 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633