Bug 1848354 - daemon: confusing eviction messages in logs, formatting fix required
Summary: daemon: confusing eviction messages in logs, formatting fix required
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.5.0
Assignee: Kirsten Garrison
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On: 1848195
Blocks:
 
Reported: 2020-06-18 08:31 UTC by OpenShift BugZilla Robot
Modified: 2020-07-13 17:44 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:44:18 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github openshift machine-config-operator pull 1842 None closed [release-4.5] Bug 1848354: daemon: fix formating in eviction message 2020-06-23 21:53:35 UTC
Red Hat Product Errata RHBA-2020:2409 None None None 2020-07-13 17:44:40 UTC

Comment 4 Michael Nguyen 2020-06-24 21:27:14 UTC
Verified on 4.5.0-0.nightly-2020-06-24-055959. The updated eviction log messages are now correctly formatted, as shown in the drain output below.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-06-24-055959   True        False         178m    Cluster version is 4.5.0-0.nightly-2020-06-24-055959
$ cat file.yaml 
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: test-file
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf;base64,c2VydmVyIGZvby5leGFtcGxlLm5ldCBtYXhkZWxheSAwLjQgb2ZmbGluZQpzZXJ2ZXIgYmFyLmV4YW1wbGUubmV0IG1heGRlbGF5IDAuNCBvZmZsaW5lCnNlcnZlciBiYXouZXhhbXBsZS5uZXQgbWF4ZGVsYXkgMC40IG9mZmxpbmUK
        filesystem: root
        mode: 0644
        path: /etc/test
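The `source:` field in the MachineConfig above embeds the file contents as a base64 data URL. As a quick sanity check, the payload can be decoded outside the cluster; a minimal Python sketch (the string is copied verbatim from file.yaml above):

```python
import base64

# Data URL copied from storage.files[0].contents.source in file.yaml above
source = ("data:text/plain;charset=utf;base64,"
          "c2VydmVyIGZvby5leGFtcGxlLm5ldCBtYXhkZWxheSAwLjQgb2ZmbGluZQpzZXJ2ZXIg"
          "YmFyLmV4YW1wbGUubmV0IG1heGRlbGF5IDAuNCBvZmZsaW5lCnNlcnZlciBiYXouZXhh"
          "bXBsZS5uZXQgbWF4ZGVsYXkgMC40IG9mZmxpbmUK")

# Strip the "data:<mediatype>;base64," prefix, then decode the remainder
payload = source.split("base64,", 1)[1]
print(base64.b64decode(payload).decode("utf-8"))
```

This prints the three `server … maxdelay 0.4 offline` lines that the daemon writes to /etc/test, which is the content change that triggers the drain and eviction sequence below.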
$ oc apply -f file.yaml 
machineconfig.machineconfiguration.openshift.io/test-file created
$ oc get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-141-241.us-west-2.compute.internal   Ready    worker   3h3m    v1.18.3+c44581d
ip-10-0-157-169.us-west-2.compute.internal   Ready    master   3h14m   v1.18.3+c44581d
ip-10-0-167-5.us-west-2.compute.internal     Ready    worker   3h5m    v1.18.3+c44581d
ip-10-0-184-44.us-west-2.compute.internal    Ready    master   3h15m   v1.18.3+c44581d
ip-10-0-196-7.us-west-2.compute.internal     Ready    worker   3h4m    v1.18.3+c44581d
ip-10-0-223-13.us-west-2.compute.internal    Ready    master   3h14m   v1.18.3+c44581d
$ oc -n openshift-machine-config-operator --field-selector spec.nodeName=ip-10-0-196-7.us-west-2.compute.internal get pods
NAME                          READY   STATUS    RESTARTS   AGE
machine-config-daemon-plgxs   2/2     Running   0          3h5m
$ oc -n openshift-machine-config-operator logs -f machine-config-daemon-plgxs -c machine-config-daemon
I0624 18:16:48.221470    2149 start.go:74] Version: v4.5.0-202006231303.p0-dirty (9712a7c7918841129a65906fb372cd4313b1052f)
I0624 18:16:48.237210    2149 start.go:84] Calling chroot("/rootfs")
I0624 18:16:48.237409    2149 rpm-ostree.go:368] Running captured: rpm-ostree status --json
I0624 18:16:48.927707    2149 daemon.go:211] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42f829e861bb1a45771a78d78d58b98de3cefadbf8ef76b379fc624697360526 (45.82.202006231429-0)
I0624 18:16:48.937342    2149 update.go:1404] Starting to manage node: ip-10-0-196-7.us-west-2.compute.internal
I0624 18:16:48.937501    2149 metrics.go:106] Registering Prometheus metrics
I0624 18:16:48.937888    2149 metrics.go:111] Starting metrics listener on 127.0.0.1:8797
I0624 18:16:48.957640    2149 rpm-ostree.go:368] Running captured: rpm-ostree status
I0624 18:16:49.062152    2149 daemon.go:818] State: idle
AutomaticUpdates: disabled
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42f829e861bb1a45771a78d78d58b98de3cefadbf8ef76b379fc624697360526
              CustomOrigin: Managed by machine-config-operator
                   Version: 45.82.202006231429-0 (2020-06-23T14:33:34Z)

  ostree://0b79771d2a6f1a091fdcfbf53814e3eff5b190f06e55384bed2e0337e4df0347
                   Version: 45.82.202006230709-0 (2020-06-23T07:13:24Z)
I0624 18:16:49.062253    2149 rpm-ostree.go:368] Running captured: journalctl --list-boots
I0624 18:16:49.076968    2149 daemon.go:825] journalctl --list-boots:
-1 f8782ecfa6324ad2994b953a959ddfd1 Wed 2020-06-24 18:13:13 UTC—Wed 2020-06-24 18:15:12 UTC
 0 f9a735368e134ef585d2be344c5f3fb8 Wed 2020-06-24 18:15:30 UTC—Wed 2020-06-24 18:16:49 UTC
I0624 18:16:49.077042    2149 daemon.go:568] Starting MachineConfigDaemon
I0624 18:16:49.078419    2149 daemon.go:575] Enabling Kubelet Healthz Monitor
E0624 18:16:52.029369    2149 reflector.go:178] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host
E0624 18:16:52.029369    2149 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host
I0624 18:16:53.045965    2149 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ip-10-0-196-7.us-west-2.compute.internal: map[machine.openshift.io/machine:openshift-machine-api/mnguyen45nightly-85w42-worker-us-west-2c-g82h7 volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json
I0624 18:16:53.056064    2149 node.go:45] Setting initial node config: rendered-worker-9ea6fc805a3d59a3758c40dd6241decf
I0624 18:16:53.075396    2149 daemon.go:736] In bootstrap mode
I0624 18:16:53.075415    2149 daemon.go:764] Current+desired config: rendered-worker-9ea6fc805a3d59a3758c40dd6241decf
I0624 18:16:53.080767    2149 daemon.go:976] No bootstrap pivot required; unlinking bootstrap node annotations
I0624 18:16:53.080810    2149 daemon.go:1014] Validating against pending config rendered-worker-9ea6fc805a3d59a3758c40dd6241decf
I0624 18:16:53.469214    2149 daemon.go:1030] Validated on-disk state
I0624 18:16:53.519549    2149 daemon.go:1064] Completing pending config rendered-worker-9ea6fc805a3d59a3758c40dd6241decf
I0624 18:16:53.519571    2149 update.go:1404] completed update for config rendered-worker-9ea6fc805a3d59a3758c40dd6241decf
I0624 18:16:53.523273    2149 daemon.go:1080] In desired config rendered-worker-9ea6fc805a3d59a3758c40dd6241decf

I0624 21:24:20.846055    2149 update.go:284] Checking Reconcilable for config rendered-worker-9ea6fc805a3d59a3758c40dd6241decf to rendered-worker-aad0e85c1cac1a5f432f119dbf2f54be
I0624 21:24:21.435339    2149 update.go:1404] Starting update from rendered-worker-9ea6fc805a3d59a3758c40dd6241decf to rendered-worker-aad0e85c1cac1a5f432f119dbf2f54be: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false}
I0624 21:24:21.438371    2149 update.go:1404] Update prepared; beginning drain
E0624 21:24:21.542070    2149 daemon.go:321] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-dhks5, openshift-dns/dns-default-2ctnk, openshift-image-registry/node-ca-zhhhv, openshift-machine-config-operator/machine-config-daemon-plgxs, openshift-monitoring/node-exporter-5ttr2, openshift-multus/multus-8zmgq, openshift-sdn/ovs-nmt42, openshift-sdn/sdn-v9mqj
I0624 21:24:21.547719    2149 daemon.go:321] evicting pod openshift-image-registry/image-registry-86969ff4c5-tzdz9
I0624 21:24:21.548200    2149 daemon.go:321] evicting pod openshift-cluster-storage-operator/csi-snapshot-controller-99575994c-hrqjt
I0624 21:24:21.548398    2149 daemon.go:321] evicting pod openshift-monitoring/alertmanager-main-2
I0624 21:24:21.548534    2149 daemon.go:321] evicting pod openshift-ingress/router-default-749c49659d-sh5qf
I0624 21:24:21.548644    2149 daemon.go:321] evicting pod openshift-monitoring/alertmanager-main-0
I0624 21:24:21.548764    2149 daemon.go:321] evicting pod openshift-monitoring/prometheus-adapter-578b74c86d-vbmm4
I0624 21:24:21.548887    2149 daemon.go:321] evicting pod openshift-monitoring/grafana-f7b65b6d5-tpjw9
I0624 21:24:21.549003    2149 daemon.go:321] evicting pod openshift-monitoring/prometheus-k8s-0
I0624 21:24:21.549892    2149 daemon.go:321] evicting pod openshift-monitoring/thanos-querier-6999bbd777-9wjhq
I0624 21:24:24.676823    2149 request.go:621] Throttling request took 1.027710895s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-ingress/pods/router-default-749c49659d-sh5qf
I0624 21:24:31.449289    2149 daemon.go:307] Evicted pod openshift-monitoring/prometheus-k8s-0
I0624 21:24:31.644505    2149 daemon.go:307] Evicted pod openshift-monitoring/grafana-f7b65b6d5-tpjw9
I0624 21:24:31.844880    2149 daemon.go:307] Evicted pod openshift-monitoring/prometheus-adapter-578b74c86d-vbmm4
I0624 21:24:32.044471    2149 daemon.go:307] Evicted pod openshift-monitoring/thanos-querier-6999bbd777-9wjhq
I0624 21:24:32.245986    2149 daemon.go:307] Evicted pod openshift-monitoring/alertmanager-main-2
I0624 21:24:32.445665    2149 daemon.go:307] Evicted pod openshift-image-registry/image-registry-86969ff4c5-tzdz9
I0624 21:24:32.845774    2149 daemon.go:307] Evicted pod openshift-cluster-storage-operator/csi-snapshot-controller-99575994c-hrqjt
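The fix being verified here is the format of the `evicting pod` / `Evicted pod` lines: each pod is now identified by its namespace/name pair. A minimal Python sketch of that formatting, for illustration only (the actual fix is in the MCO's Go drain code, and this function name is an assumption, not the real implementation):

```python
# Illustrative sketch only: mirrors the namespace/name eviction-message
# format seen in the verified daemon log above; not the MCO's Go code.
def eviction_message(action: str, namespace: str, name: str) -> str:
    # Identify the pod as namespace/name rather than dumping the pod object
    return f"{action} pod {namespace}/{name}"

print(eviction_message("evicting", "openshift-monitoring", "prometheus-k8s-0"))
print(eviction_message("Evicted", "openshift-monitoring", "prometheus-k8s-0"))
```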

Comment 5 errata-xmlrpc 2020-07-13 17:44:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

