Bug 1844613 - Kubelet complains about length of metadata.labels when registering nodes with long hostnames.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RHCOS
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Ben Howard
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On:
Blocks: 1854436
 
Reported: 2020-06-05 19:01 UTC by Jeremiah Stuever
Modified: 2020-10-27 16:05 UTC (History)
6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1854436 (view as bug list)
Environment:
Last Closed: 2020-10-27 16:05:31 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:05:53 UTC

Description Jeremiah Stuever 2020-06-05 19:01:05 UTC
Description of problem:
When a GCP cluster is installed using a long cluster_name, the kubelet on the masters complains that metadata.labels is too long when registering nodes.

How reproducible:
Always

Steps to Reproduce:
1. Create a GCP cluster with a long cluster_name such that the hostnames of the masters are exactly 64 characters long (or longer).
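For reference (not part of the original report), Kubernetes caps label values, including the kubernetes.io/hostname label the kubelet sets, at 63 characters. A minimal shell check of a candidate hostname against that limit might look like:

```shell
#!/usr/bin/env bash
# Minimal sketch: verify a candidate node hostname fits the 63-character
# limit Kubernetes enforces on label values such as kubernetes.io/hostname.
check_hostname_length() {
  local name="$1"
  if [ "${#name}" -gt 63 ]; then
    echo "FAIL: ${name} is ${#name} characters (limit is 63)"
    return 1
  fi
  echo "OK: ${name} is ${#name} characters"
}

# The 64-character master hostname from this bug fails the check:
check_hostname_length "ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal" || true
```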

Actual results:

time="2020-06-05T17:47:35Z" level=info msg="Waiting up to 20m0s for the Kubernetes API at https://api.ci-op-br5djjk9-d5086.origin-ci-int-gce.dev.openshift.com:6443..."
time="2020-06-05T18:07:35Z" level=error msg="Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get https://api.ci-op-br5djjk9-d5086.origin-ci-int-gce.dev.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators: dial tcp 35.237.159.205:6443: connect: connection refused"
time="2020-06-05T18:07:35Z" level=info msg="Use the following commands to gather logs from the cluster"
time="2020-06-05T18:07:35Z" level=info msg="openshift-install gather bootstrap --help"
time="2020-06-05T18:07:35Z" level=fatal msg="failed waiting for Kubernetes API: Get https://api.ci-op-br5djjk9-d5086.origin-ci-int-gce.dev.openshift.com:6443/version?timeout=32s: dial tcp 35.237.159.205:6443: connect: connection refused"

control-plane/10.0.0.4/journals/kubelet.log:

Jun 05 17:48:09 ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal hyperkube[1530]: E0605 17:48:09.009479    1530 kubelet_node_status.go:92] Unable to register node "ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal" with API server: Node "ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal" is invalid: metadata.labels: Invalid value: "ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal": must be no more than 63 characters

Expected results:

Cluster should install as expected.

Additional info:

This was previously masked by a limit on cluster_name length, which was relaxed with https://github.com/openshift/installer/pull/3544

Comment 2 Micah Abbott 2020-06-08 18:39:57 UTC
Per https://github.com/openshift/machine-config-operator/pull/1711, RHCOS will have a NetworkManager dispatcher script that will truncate hostnames down to 63 characters to avoid the long name problem on GCP.  See also https://bugzilla.redhat.com/show_bug.cgi?id=1809345

I'm not sure what else can be done on the RHCOS side to affect the metadata labels that the kubelet is using, other than further truncating the hostname, but that doesn't seem prudent.

I found the following WIP PR to the installer that may address some of this, so it's unclear why this is assigned to RHCOS: https://github.com/openshift/installer/pull/3713

However, it already got retargeted from Installer, so maybe I am reading the intent of that PR incorrectly.

@Jeremiah can you fill in some details here?


Setting this as medium priority and targeting 4.6

Comment 3 Jeremiah Stuever 2020-06-08 20:08:17 UTC
This is clearly happening before the dispatcher script gets a chance to run; the hostname in the logs is 64 characters long.

$ echo -n "ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal" | wc -c
64

I assume MCO only runs after the node has joined the cluster... and this error is happening when the node tries to join the cluster.

The changes in installer PR 3713 bring the hostnames in GCP UPI on par with GCP IPI. I was hoping this might side-step the problem after realizing GCP IPI did not experience this error. Interestingly, the IPI hostnames are > 64 characters long. The latest run on this PR supports this; bootstrapping completed successfully without running into this error. I would assume that somewhere the hostname (or label) is being truncated at 64 characters when it should be 63.
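The suspected off-by-one can be illustrated directly in the shell (an illustrative sketch using the hostname from this report, not code from any of the components involved):

```shell
name="ci-op-br5djjk9-d5086-lf6kv-m-1.c.openshift-gce-devel-ci.internal"
# Truncating to 64 characters leaves the name one character over the
# 63-character label limit; truncating to 63 makes it a valid length.
t64="${name:0:64}"
t63="${name:0:63}"
echo "${#name} ${#t64} ${#t63}"   # → 64 64 63
```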

Comment 4 Ben Howard 2020-06-10 16:27:37 UTC
Without a console log, I'm unable to determine if this is an OS bug (ordering problem) or something else.

Comment 11 Ben Howard 2020-07-06 22:45:22 UTC
Actually, after spending way more time than I care to confess, I tracked this down to dhclient _truncating_ the hostname to 64 characters.

# printf $(</proc/sys/kernel/hostname) | wc -m
64

# cat /proc/sys/kernel/hostname
c30dabda1c28ac3a52fecc76a49da356c89c09b08d8c2765752f6dd8926bfe2.

In 4.6, RHCOS uses NetworkManager in the initramfs; pre-4.6, NetworkManager only runs at first boot. What's unclear to me is _when_ in the boot this is happening. On second boot the 90-long-hostname NM patch takes effect and would prevent the hostname from being longer than 63 characters.

I spent the afternoon on a deep dive and found that 4.6 is unaffected because it uses NetworkManager in the initramfs AND because of https://github.com/openshift/machine-config-operator/pull/1711.

We might need to revert https://github.com/openshift/installer/pull/3713 for the installer. We have a couple of ways to "fix" this but they require changing the behavior of RHCOS:
- add an initramfs dhclient script that truncates the name
- write the hostname via Ignition
- change Afterburn for 4.{3,5}
- revert Installer #3713
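A minimal sketch of the first option (truncating the hostname in an early-boot script), assuming the guard falls back to the short first-label hostname when the FQDN exceeds 63 characters; this matches the 90-long-hostname behavior visible in the verification logs later in this bug, though the real script may differ:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an early-boot hostname guard; the actual
# 90-long-hostname script from openshift/machine-config-operator#1711
# may differ in detail.
truncate_hostname() {
  local fqdn="$1"
  if [ "${#fqdn}" -le 63 ]; then
    printf '%s\n' "$fqdn"
  else
    # Fall back to the short (first-label) hostname, as the
    # 90-long-hostname log lines in this bug suggest.
    printf '%s\n' "${fqdn%%.*}"
  fi
}

truncate_hostname "yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal"
```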

Comment 12 Ben Howard 2020-07-06 22:59:29 UTC
This only affects 4.5 and earlier, not 4.6. Given that this could prevent anyone from installing OpenShift with a long hostname on GCP, I'm calling this "high" and targeting it at 4.5 (of course, please feel free to tell me I'm wrong).

Comment 13 Scott Dodson 2020-07-07 13:45:24 UTC
This is believed not to be an issue in 4.6, but let's have QE verify that; setting MODIFIED so the automation flows pick this up.

Comment 21 Michael Nguyen 2020-07-16 03:05:37 UTC
Verified on 4.6.0-0.nightly-2020-07-15-170241

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-15-170241   True        False         44m     Cluster version is 4.6.0-0.nightly-2020-07-15-170241


IPI installed successfully
-----------------------------------------------
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 94% complete 
DEBUG Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, cloud-credential, cluster-autoscaler, console, csi-snapshot-controller, image-registry, ingress, insights, kube-storage-version-migrator, machine-api, marketplace, monitoring, openshift-samples, storage 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 94% complete 
DEBUG Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, cloud-credential, cluster-autoscaler, console, csi-snapshot-controller, image-registry, ingress, insights, kube-storage-version-migrator, machine-api, marketplace, monitoring, openshift-samples, storage 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 96% complete 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 97% complete 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 97% complete 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 98% complete, waiting on authentication, cluster-autoscaler, console, csi-snapshot-controller, image-registry, ingress, kube-storage-version-migrator, machine-api, monitoring 
DEBUG Still waiting for the cluster to initialize: Working towards 4.6.0-0.nightly-2020-07-15-170241: 99% complete 
DEBUG Cluster is initialized                       
INFO Waiting up to 10m0s for the openshift-console route to be created... 
DEBUG Route found in openshift-console namespace: console 
DEBUG Route found in openshift-console namespace: downloads 
DEBUG OpenShift console route is created           
INFO Install complete!                       


The hostname is truncated successfully and there are no errors about metadata.labels in the kubelet logs
-----------------------------------------------
$ oc get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
yunjiang00123456789-cvndz-master-0         Ready    master   59m   v1.18.3+a34fde4
yunjiang00123456789-cvndz-master-1         Ready    master   59m   v1.18.3+a34fde4
yunjiang00123456789-cvndz-master-2         Ready    master   59m   v1.18.3+a34fde4
yunjiang00123456789-cvndz-worker-a-gbzhl   Ready    worker   33m   v1.18.3+a34fde4
yunjiang00123456789-cvndz-worker-b-p42w5   Ready    worker   33m   v1.18.3+a34fde4
yunjiang00123456789-cvndz-worker-c-8pxv8   Ready    worker   32m   v1.18.3+a34fde4

$ oc debug node/yunjiang00123456789-cvndz-worker-a-gbzhl
Starting pod/yunjiang00123456789-cvndz-worker-a-gbzhl-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# journalctl -u kubelet --no-pager | grep "metadata.labels"
sh-4.4# ls
bin  boot  dev	etc  home  lib	lib64  media  mnt  opt	ostree	proc  root  run  sbin  srv  sys  sysroot  tmp  usr  var
sh-4.4# 
sh-4.4# journalctl -u node-valid-hostname.service
-- Logs begin at Thu 2020-07-16 02:02:42 UTC, end at Thu 2020-07-16 02:40:57 UTC. --
Jul 16 02:03:00 localhost systemd[1]: Starting Ensure the node hostname is valid for the cluster...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
Jul 16 02:04:19 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Stopped Ensure the node hostname is valid for the cluster.
Jul 16 02:04:19 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: node-valid-hostname.service: Consumed 21ms CPU time
-- Reboot --
Jul 16 02:04:46 localhost systemd[1]: Starting Ensure the node hostname is valid for the cluster...
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
Jul 16 02:38:31 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Stopped Ensure the node hostname is valid for the cluster.
Jul 16 02:38:31 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: node-valid-hostname.service: Consumed 0 CPU time
-- Reboot --
Jul 16 02:39:23 localhost systemd[1]: Starting Ensure the node hostname is valid for the cluster...
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
sh-4.4# journalctl --no-pager | grep -20 "using trunacated hostname"
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8890] dhcp4 (ens4): option rfc3442_classless_static_routes => '10.0.32.1/32 0.0.0.0 0.0.0.0/0 10.0.32.1'
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8890] dhcp4 (ens4): option routers              => '10.0.32.1'
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8890] dhcp4 (ens4): option subnet_mask          => '255.255.255.255'
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8890] dhcp4 (ens4): state changed unknown -> bound
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8909] device (ens4): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8945] device (ens4): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8948] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8954] manager: NetworkManager state is now CONNECTED_LOCAL
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8963] manager: NetworkManager state is now CONNECTED_SITE
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8964] policy: set 'Wired Connection' (ens4) as default for IPv4 routing and DNS
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8966] policy: set-hostname: set hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' (from DHCPv4)
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8974] device (ens4): Activation: successful, device activated.
Jul 16 02:03:01 localhost dbus-daemon[1402]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.7' (uid=0 pid=1578 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8982] manager: NetworkManager state is now CONNECTED_GLOBAL
Jul 16 02:03:01 localhost dbus-daemon[1402]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Jul 16 02:03:01 localhost NetworkManager[1578]: <info>  [1594864981.8991] manager: startup complete
Jul 16 02:03:01 localhost NetworkManager[1578]: <warn>  [1594864981.8996] hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' using hostnamed: GDBus.Error:org.freedesktop.DBus.Error.InvalidArgs: Invalid hostname 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal'
Jul 16 02:03:01 localhost NetworkManager[1578]: <warn>  [1594864981.8996] policy: set-hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal': (1) Operation not permitted
Jul 16 02:03:01 localhost NetworkManager[1578]: <warn>  [1594864981.8996] policy: set-hostname: you should use hostnamed when systemd hardening is in effect!
Jul 16 02:03:01 localhost systemd[1]: Started Network Manager Wait Online.
Jul 16 02:03:01 localhost 90-long-hostname[1648]: yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal is longer than 63 characters, using trunacated hostname
Jul 16 02:03:01 localhost 90-long-hostname[1650]: setting transient hostname to yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:03:01 yunjiang00123456789-cvndz-worker-a-gbzhl systemd-hostnamed[1589]: Changed host name to 'yunjiang00123456789-cvndz-worker-a-gbzhl'
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl bash[1656]: node identified as yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Reached target Network is Online.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting Generate /run/issue.d/console-login-helper-messages.issue...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting Machine Config Daemon Pull...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting Resize root partition...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Generate /run/issue.d/console-login-helper-messages.issue.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting RPC Bind...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting Permit User Sessions...
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Permit User Sessions.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Getty on tty1.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Serial Getty on ttyS0.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1697]: Version 2.3.3 starting
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1697]: Flags: TI-RPC
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Reached target Login Prompts.
Jul 16 02:03:02 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started RPC Bind.
--
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9009] dhcp4 (ens4): option rfc3442_classless_static_routes => '10.0.32.1/32 0.0.0.0 0.0.0.0/0 10.0.32.1'
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9009] dhcp4 (ens4): option routers              => '10.0.32.1'
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9009] dhcp4 (ens4): option subnet_mask          => '255.255.255.255'
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9009] dhcp4 (ens4): state changed unknown -> bound
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9026] device (ens4): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9047] device (ens4): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9050] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9055] manager: NetworkManager state is now CONNECTED_LOCAL
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9067] manager: NetworkManager state is now CONNECTED_SITE
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9069] policy: set 'Wired Connection' (ens4) as default for IPv4 routing and DNS
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9072] policy: set-hostname: set hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' (from DHCPv4)
Jul 16 02:04:47 localhost dbus-daemon[1192]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.6' (uid=0 pid=1364 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Jul 16 02:04:47 localhost dbus-daemon[1192]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9097] device (ens4): Activation: successful, device activated.
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9105] manager: NetworkManager state is now CONNECTED_GLOBAL
Jul 16 02:04:47 localhost NetworkManager[1364]: <info>  [1594865087.9114] manager: startup complete
Jul 16 02:04:47 localhost NetworkManager[1364]: <warn>  [1594865087.9119] hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' using hostnamed: GDBus.Error:org.freedesktop.DBus.Error.InvalidArgs: Invalid hostname 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal'
Jul 16 02:04:47 localhost NetworkManager[1364]: <warn>  [1594865087.9119] policy: set-hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal': (1) Operation not permitted
Jul 16 02:04:47 localhost NetworkManager[1364]: <warn>  [1594865087.9119] policy: set-hostname: you should use hostnamed when systemd hardening is in effect!
Jul 16 02:04:47 localhost systemd[1]: Started Network Manager Wait Online.
Jul 16 02:04:47 localhost 90-long-hostname[1438]: yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal is longer than 63 characters, using trunacated hostname
Jul 16 02:04:47 localhost 90-long-hostname[1440]: setting transient hostname to yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:04:47 yunjiang00123456789-cvndz-worker-a-gbzhl systemd-hostnamed[1379]: Changed host name to 'yunjiang00123456789-cvndz-worker-a-gbzhl'
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl bash[1446]: node identified as yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Reached target Network is Online.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting OpenSSH server daemon...
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting CRI-O Auto Update Script...
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting RPC Bind...
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl sshd[1449]: Server listening on 0.0.0.0 port 22.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl sshd[1449]: Server listening on :: port 22.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started OpenSSH server daemon.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started RPC Bind.
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1452]: Version 2.3.3 starting
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1452]: Flags: TI-RPC
Jul 16 02:04:48 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Jul 16 02:04:49 yunjiang00123456789-cvndz-worker-a-gbzhl crio[1450]: time="2020-07-16 02:04:49.071548782Z" level=info msg="Version file /var/run/crio/version not found: open /var/run/crio/version: no such file or directory: triggering wipe of containers"
Jul 16 02:04:49 yunjiang00123456789-cvndz-worker-a-gbzhl crio[1450]: time="2020-07-16 02:04:49.071757721Z" level=info msg="Version file /var/lib/crio/version not found: open /var/lib/crio/version: no such file or directory: triggering wipe of images"
Jul 16 02:04:49 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started CRI-O Auto Update Script.
--
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4740] dhcp4 (ens4): option rfc3442_classless_static_routes => '10.0.32.1/32 0.0.0.0 0.0.0.0/0 10.0.32.1'
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4740] dhcp4 (ens4): option routers              => '10.0.32.1'
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4740] dhcp4 (ens4): option subnet_mask          => '255.255.255.255'
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4740] dhcp4 (ens4): state changed unknown -> bound
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4764] device (ens4): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4797] device (ens4): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4800] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4807] manager: NetworkManager state is now CONNECTED_LOCAL
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4821] manager: NetworkManager state is now CONNECTED_SITE
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4822] policy: set 'Wired Connection' (ens4) as default for IPv4 routing and DNS
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4824] policy: set-hostname: set hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' (from DHCPv4)
Jul 16 02:39:24 localhost dbus-daemon[1140]: [system] Activating via systemd: service name='org.freedesktop.resolve1' unit='dbus-org.freedesktop.resolve1.service' requested by ':1.6' (uid=0 pid=1422 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Jul 16 02:39:24 localhost dbus-daemon[1140]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found.
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4900] device (ens4): Activation: successful, device activated.
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4908] manager: NetworkManager state is now CONNECTED_GLOBAL
Jul 16 02:39:24 localhost NetworkManager[1422]: <info>  [1594867164.4915] manager: startup complete
Jul 16 02:39:24 localhost NetworkManager[1422]: <warn>  [1594867164.4920] hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal' using hostnamed: GDBus.Error:org.freedesktop.DBus.Error.InvalidArgs: Invalid hostname 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal'
Jul 16 02:39:24 localhost NetworkManager[1422]: <warn>  [1594867164.4920] policy: set-hostname: couldn't set the system hostname to 'yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal': (1) Operation not permitted
Jul 16 02:39:24 localhost NetworkManager[1422]: <warn>  [1594867164.4920] policy: set-hostname: you should use hostnamed when systemd hardening is in effect!
Jul 16 02:39:24 localhost systemd[1]: Started Network Manager Wait Online.
Jul 16 02:39:24 localhost 90-long-hostname[1539]: yunjiang00123456789-cvndz-worker-a-gbzhl.c.openshift-gce-devel-ci.internal is longer than 63 characters, using trunacated hostname
Jul 16 02:39:24 localhost 90-long-hostname[1541]: setting transient hostname to yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:39:24 yunjiang00123456789-cvndz-worker-a-gbzhl systemd-hostnamed[1436]: Changed host name to 'yunjiang00123456789-cvndz-worker-a-gbzhl'
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl bash[1547]: node identified as yunjiang00123456789-cvndz-worker-a-gbzhl
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started Ensure the node hostname is valid for the cluster.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Reached target Network is Online.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting OpenSSH server daemon...
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting CRI-O Auto Update Script...
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl sshd[1549]: Server listening on 0.0.0.0 port 22.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl sshd[1549]: Server listening on :: port 22.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started OpenSSH server daemon.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Starting RPC Bind...
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started RPC Bind.
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1553]: Version 2.3.3 starting
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl rpc.statd[1553]: Flags: TI-RPC
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Jul 16 02:39:25 yunjiang00123456789-cvndz-worker-a-gbzhl crio[1550]: time="2020-07-16 02:39:25.511604712Z" level=info msg="Version file /var/run/crio/version not found: open /var/run/crio/version: no such file or directory: triggering wipe of containers"
Jul 16 02:39:26 yunjiang00123456789-cvndz-worker-a-gbzhl crio[1550]: time="2020-07-16 02:39:26.192069648Z" level=info msg="Wiping containers"
Jul 16 02:39:26 yunjiang00123456789-cvndz-worker-a-gbzhl crio[1550]: time="2020-07-16 02:39:26.216986174Z" level=info msg="Deleted container 7e95ba6ed3e233d5e4e4bbc6ac0288f0146b8be5d2036825f34400a58eda59e2"
sh-4.4# 
sh-4.4# hostname     
yunjiang00123456789-cvndz-worker-a-gbzhl

Comment 23 errata-xmlrpc 2020-10-27 16:05:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

