Bug 1848195

Summary: daemon: confusing eviction messages in logs, formatting fix required
Product: OpenShift Container Platform Reporter: Kirsten Garrison <kgarriso>
Component: Machine Config Operator Assignee: Kirsten Garrison <kgarriso>
Status: CLOSED ERRATA QA Contact: Michael Nguyen <mnguyen>
Severity: low Docs Contact:
Priority: unspecified    
Version: 4.5   
Target Milestone: ---   
Target Release: 4.6.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-10-27 16:08:03 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1848354    

Description Kirsten Garrison 2020-06-17 22:28:27 UTC
The eviction messages in the MCD logs changed format and need to be fixed: the "Node", pod name, and namespace are run together with no separators, which makes them hard to read and unclear:

I0617 21:04:51.760803    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-4-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-scheduler
I0617 21:04:53.159221    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-6-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-scheduler
I0617 21:04:54.375075    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-2-ip-10-0-194-55.us-east-2.compute.internal/openshift-etcd
I0617 21:04:55.368459    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-3-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-apiserver
I0617 21:04:55.968332    6609 daemon.go:307] Evicted pod from Nodepodinstaller-2-ip-10-0-194-55.us-east-2.compute.internal/openshift-etcd
I0617 21:04:56.562366    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-6-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-apiserver
I0617 21:04:57.164593    6609 daemon.go:307] Evicted pod from Nodepodinstaller-4-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-controller-manager
I0617 21:04:57.768920    6609 daemon.go:307] Evicted pod from Nodepodrevision-pruner-7-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-apiserver
I0617 21:04:58.229887    6609 daemon.go:307] Evicted pod from Nodepodinstaller-3-ip-10-0-194-55.us-east-2.compute.internal/openshift-kube-apiserver
I0617 21:04:58.755115    6609 daemon.go:307] Evicted pod from Nodepodcluster-version-operator-7d989cff59-slc7f/openshift-cluster-version

Comment 4 Michael Nguyen 2020-06-24 13:43:40 UTC
Verified on 4.6.0-0.nightly-2020-06-24-071932
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-06-24-071932   True        False         52m     Cluster version is 4.6.0-0.nightly-2020-06-24-071932

$ cat file.yaml 
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: test-file
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf;base64,c2VydmVyIGZvby5leGFtcGxlLm5ldCBtYXhkZWxheSAwLjQgb2ZmbGluZQpzZXJ2ZXIgYmFyLmV4YW1wbGUubmV0IG1heGRlbGF5IDAuNCBvZmZsaW5lCnNlcnZlciBiYXouZXhhbXBsZS5uZXQgbWF4ZGVsYXkgMC40IG9mZmxpbmUK
        filesystem: root
        mode: 0644
        path: /etc/test
$ oc apply -f file.yaml 
machineconfig.machineconfiguration.openshift.io/test-file created
$ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-136-182.us-west-2.compute.internal   Ready    worker   51m   v1.18.3+314a8f2
ip-10-0-156-223.us-west-2.compute.internal   Ready    master   59m   v1.18.3+314a8f2
ip-10-0-187-86.us-west-2.compute.internal    Ready    master   61m   v1.18.3+314a8f2
ip-10-0-190-166.us-west-2.compute.internal   Ready    worker   51m   v1.18.3+314a8f2
ip-10-0-192-187.us-west-2.compute.internal   Ready    worker   51m   v1.18.3+314a8f2
ip-10-0-192-67.us-west-2.compute.internal    Ready    master   59m   v1.18.3+314a8f2
$ oc get mc
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
00-worker                                          0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
01-master-container-runtime                        0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
01-master-kubelet                                  0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
01-worker-container-runtime                        0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
01-worker-kubelet                                  0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
99-master-generated-registries                     0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
99-master-ssh                                                                                 2.2.0             66m
99-worker-generated-registries                     0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
99-worker-ssh                                                                                 2.2.0             66m
rendered-master-af0f1be3b81f422d01a1e9decfcb2e0d   0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
rendered-worker-8639b80075f84fbc1d8f457de0185730   0f33cc64182f6cb69609064e235235214411b648   2.2.0             58m
test-file                                                                                     2.2.0             4s
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-af0f1be3b81f422d01a1e9decfcb2e0d   True      False      False      3              3                   3                     0                      59m
worker   rendered-worker-8639b80075f84fbc1d8f457de0185730   False     True       False      3              0                   0                     0                      59m
$ watch oc get nodes
$ oc get nodes
NAME                                         STATUS                     ROLES    AGE   VERSION
ip-10-0-136-182.us-west-2.compute.internal   Ready                      worker   54m   v1.18.3+314a8f2
ip-10-0-156-223.us-west-2.compute.internal   Ready                      master   62m   v1.18.3+314a8f2
ip-10-0-187-86.us-west-2.compute.internal    Ready                      master   64m   v1.18.3+314a8f2
ip-10-0-190-166.us-west-2.compute.internal   Ready,SchedulingDisabled   worker   54m   v1.18.3+314a8f2
ip-10-0-192-187.us-west-2.compute.internal   Ready                      worker   54m   v1.18.3+314a8f2
ip-10-0-192-67.us-west-2.compute.internal    Ready                      master   62m   v1.18.3+314a8f2
$  oc get pods -n openshift-machine-config-operator  --field-selector spec.nodeName=ip-10-0-136-182.us-west-2.compute.internal
NAME                          READY   STATUS    RESTARTS   AGE
machine-config-daemon-prcss   2/2     Running   0          54m
$ oc -n openshift-machine-config-operator  logs -f machine-config-daemon-prcss -c machine-config-daemon
I0624 12:34:50.860193    2109 start.go:74] Version: v4.6.0-202006240615.p0-dirty (0f33cc64182f6cb69609064e235235214411b648)
I0624 12:34:50.879695    2109 start.go:84] Calling chroot("/rootfs")
I0624 12:34:50.880099    2109 rpm-ostree.go:369] Running captured: rpm-ostree status --json
I0624 12:34:51.397893    2109 daemon.go:218] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:769432fdae52f909658c471bf9aa8d0f00fb398bb4976bec084c75037d87fa60 (46.82.202006231340-0)
I0624 12:34:51.579657    2109 daemon.go:225] Installed Ignition binary version: 0.35.1
I0624 12:34:51.584101    2109 metrics.go:106] Registering Prometheus metrics
I0624 12:34:51.584218    2109 metrics.go:111] Starting metrics listener on 127.0.0.1:8797
I0624 12:34:51.597250    2109 update.go:1404] Starting to manage node: ip-10-0-136-182.us-west-2.compute.internal
I0624 12:34:51.610269    2109 rpm-ostree.go:369] Running captured: rpm-ostree status
I0624 12:34:51.686602    2109 daemon.go:833] State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:769432fdae52f909658c471bf9aa8d0f00fb398bb4976bec084c75037d87fa60
              CustomOrigin: Managed by machine-config-operator
                   Version: 46.82.202006231340-0 (2020-06-23T13:44:45Z)

  ostree://7858a6e67a225e03ff54e301010c69d5ffb2ab22ecb42c99adef747b92c38016
                   Version: 46.82.202006190240-0 (2020-06-19T02:44:38Z)
I0624 12:34:51.686682    2109 rpm-ostree.go:369] Running captured: journalctl --list-boots
I0624 12:34:51.693002    2109 daemon.go:840] journalctl --list-boots:
-1 41c3d4b88c3748d38bae9e9a5ebaedb5 Wed 2020-06-24 12:31:25 UTC—Wed 2020-06-24 12:33:17 UTC
 0 d92588ab834f4d438e837a69eeb2c8a8 Wed 2020-06-24 12:33:33 UTC—Wed 2020-06-24 12:34:51 UTC
I0624 12:34:51.693086    2109 rpm-ostree.go:369] Running captured: systemctl list-units --state=failed --no-legend
I0624 12:34:51.701223    2109 daemon.go:855] systemd service state: OK
I0624 12:34:51.701281    2109 daemon.go:583] Starting MachineConfigDaemon
I0624 12:34:51.701376    2109 daemon.go:590] Enabling Kubelet Healthz Monitor
E0624 12:34:54.667847    2109 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://172.30.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host
E0624 12:34:54.668268    2109 reflector.go:178] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0: dial tcp 172.30.0.1:443: connect: no route to host
I0624 12:34:55.705693    2109 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ip-10-0-136-182.us-west-2.compute.internal: map[machine.openshift.io/machine:openshift-machine-api/mnguyen46-clbq2-worker-us-west-2a-4scdm volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json
I0624 12:34:55.706245    2109 node.go:45] Setting initial node config: rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 12:34:55.729637    2109 daemon.go:751] In bootstrap mode
I0624 12:34:55.729657    2109 daemon.go:779] Current+desired config: rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 12:34:55.747105    2109 daemon.go:1007] No bootstrap pivot required; unlinking bootstrap node annotations
I0624 12:34:55.747154    2109 daemon.go:1045] Validating against pending config rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 12:34:56.197253    2109 daemon.go:1055] Validated on-disk state
I0624 12:34:56.227245    2109 daemon.go:1095] Completing pending config rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 12:34:56.227267    2109 update.go:1404] completed update for config rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 12:34:56.233039    2109 daemon.go:1111] In desired config rendered-worker-8639b80075f84fbc1d8f457de0185730
I0624 13:25:48.307650    2109 update.go:284] Checking Reconcilable for config rendered-worker-8639b80075f84fbc1d8f457de0185730 to rendered-worker-55ee30d203654c470ba6ceb25267baad
I0624 13:25:48.906529    2109 update.go:1404] Starting update from rendered-worker-8639b80075f84fbc1d8f457de0185730 to rendered-worker-55ee30d203654c470ba6ceb25267baad: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false}
I0624 13:25:48.909179    2109 update.go:1404] Update prepared; beginning drain
E0624 13:25:49.008753    2109 daemon.go:336] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-l7kvl, openshift-dns/dns-default-6t6gg, openshift-image-registry/node-ca-lvz5p, openshift-machine-config-operator/machine-config-daemon-prcss, openshift-monitoring/node-exporter-nzcnv, openshift-multus/multus-gdpsz, openshift-multus/network-metrics-daemon-vxf5c, openshift-sdn/ovs-482xz, openshift-sdn/sdn-z55l8
I0624 13:25:49.018616    2109 daemon.go:336] evicting pod openshift-marketplace/community-operators-649bf6c656-nr8z2
I0624 13:25:49.018895    2109 daemon.go:336] evicting pod openshift-image-registry/image-registry-bb566bc48-cwtxp
I0624 13:25:49.019057    2109 daemon.go:336] evicting pod openshift-kube-storage-version-migrator/migrator-5cff5df479-qp7r6
I0624 13:25:49.019197    2109 daemon.go:336] evicting pod openshift-marketplace/certified-operators-755bf4f786-mmjr8
I0624 13:25:49.019296    2109 daemon.go:336] evicting pod openshift-marketplace/redhat-operators-bb484c6cf-hcpnf
I0624 13:25:49.019443    2109 daemon.go:336] evicting pod openshift-marketplace/redhat-marketplace-76897f4998-5rpl9
I0624 13:25:49.019551    2109 daemon.go:336] evicting pod openshift-monitoring/alertmanager-main-2
I0624 13:25:49.019720    2109 daemon.go:336] evicting pod openshift-monitoring/prometheus-k8s-0
I0624 13:25:49.019916    2109 daemon.go:336] evicting pod openshift-service-catalog-removed/openshift-service-catalog-controller-manager-remover-cp5fl
I0624 13:25:49.121281    2109 daemon.go:322] Evicted pod openshift-service-catalog-removed/openshift-service-catalog-controller-manager-remover-cp5fl
I0624 13:25:52.325808    2109 request.go:621] Throttling request took 1.191111118s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-marketplace/pods/community-operators-649bf6c656-nr8z2
I0624 13:26:02.511659    2109 request.go:621] Throttling request took 1.179917301s, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-image-registry/pods/image-registry-bb566bc48-cwtxp
I0624 13:26:02.919524    2109 daemon.go:322] Evicted pod openshift-kube-storage-version-migrator/migrator-5cff5df479-qp7r6
I0624 13:26:03.115977    2109 daemon.go:322] Evicted pod openshift-marketplace/redhat-operators-bb484c6cf-hcpnf
I0624 13:26:03.314676    2109 daemon.go:322] Evicted pod openshift-monitoring/alertmanager-main-2
I0624 13:26:03.515229    2109 daemon.go:322] Evicted pod openshift-marketplace/certified-operators-755bf4f786-mmjr8
I0624 13:26:03.716351    2109 daemon.go:322] Evicted pod openshift-marketplace/redhat-marketplace-76897f4998-5rpl9
I0624 13:26:03.915268    2109 daemon.go:322] Evicted pod openshift-monitoring/prometheus-k8s-0
I0624 13:26:04.115645    2109 daemon.go:322] Evicted pod openshift-marketplace/community-operators-649bf6c656-nr8z2
I0624 13:26:04.314202    2109 daemon.go:322] Evicted pod openshift-image-registry/image-registry-bb566bc48-cwtxp
I0624 13:26:04.314228    2109 update.go:1404] drain complete
I0624 13:26:04.316880    2109 update.go:190] Successful drain took 15.405505366 seconds

Comment 6 errata-xmlrpc 2020-10-27 16:08:03 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196