Description of problem:

In /var/log/messages I have found the following messages:

Dec 7 14:28:56 node.example.com journal: cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory
Dec 7 14:28:57 node.example.com journal: node/node.example.com annotated

I have checked the file sync.yaml and found that it does not check whether the file exists (line 115):
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_node_group/files/sync.yaml#L115

Version-Release number of selected component (if applicable):
OCP v3.11

Actual results:
Dec 7 14:28:56 node.example.com journal: cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory

Expected results:
This should not be checked, as the file is present.
Correction to the Expected results above: the script should check whether the file is present before reading it.
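For context, a minimal sketch (using a hypothetical temp path, not the real /etc/sysconfig file) of why the log line appears even though the script tolerates the failure: command substitution captures only stdout, and `|| :` only masks the non-zero exit status, so cat's error message still reaches stderr, which the sync pod's journal picks up.

```shell
# Hypothetical demo path; the real script reads /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE.
missing=/tmp/KUBELET_HOSTNAME_OVERRIDE.demo
rm -f "$missing"

# `|| :` masks the exit status, but cat's "No such file or directory"
# is still written to stderr and ends up in the journal.
KUBELET_HOSTNAME_OVERRIDE=$(cat "$missing") || :
echo "captured value: '${KUBELET_HOSTNAME_OVERRIDE}'"   # empty string
```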
*** Bug 1657768 has been marked as a duplicate of this bug. ***
Hi team, Any updates?
Can you detail what undesired behavior is actually being experienced? Reading this bug it appears that it's just a log entry emitted periodically to the journal.
I'm seeing the same issue.
@Scott, Michael: Could you confirm this solution? I already tested it in a lab, and I can write a KCS based on it.

- For install/upgrade, change line 115 of roles/openshift_node_group/files/sync:

Before: KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
After:  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE 2>/dev/null) || :

- For a working cluster:

$ oc edit daemonset sync -n openshift-node

Search for the KUBELET_HOSTNAME_OVERRIDE line and make the same change:

Before: KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
After:  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE 2>/dev/null) || :

Note: This "issue" was introduced in PR 10343 [1]. Although it is just an error log line, it can generate many error lines that are not useful.

[1] https://github.com/openshift/openshift-ansible/pull/10343
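The proposed change can be sketched as follows (again with a hypothetical temp path standing in for /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE): with `2>/dev/null`, a missing override file produces no stderr noise, while a present file is still read normally.

```shell
# Hypothetical demo path standing in for the real /etc/sysconfig file.
f=/tmp/KUBELET_HOSTNAME_OVERRIDE.demo

rm -f "$f"
KUBELET_HOSTNAME_OVERRIDE=$(cat "$f" 2>/dev/null) || :
echo "missing -> '${KUBELET_HOSTNAME_OVERRIDE}'"    # missing -> ''  (no error line)

printf 'node01.example.com' > "$f"                  # node01.example.com is a made-up value
KUBELET_HOSTNAME_OVERRIDE=$(cat "$f" 2>/dev/null) || :
echo "present -> '${KUBELET_HOSTNAME_OVERRIDE}'"    # present -> 'node01.example.com'
rm -f "$f"
```

An alternative would be an explicit existence test (`[ -f "$f" ] && ...`), but the stderr redirect is the smallest diff against the existing one-liner.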
I've just created KCS 3958481 [1] covering this issue and its solution. I'll also submit a PR to fix it.

[1] https://access.redhat.com/solutions/3958481
https://github.com/openshift/openshift-ansible/pull/11337
I cannot reproduce this issue. Could you provide steps to reproduce it? Thanks.

openshift-ansible-3.11.70-1.git.0.aa15bf2.el7

# cat /etc/sysconfig/atomic-openshift-node
DEBUG_LOGLEVEL=8
# cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE
cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory
# oc get daemonset.apps/sync -o yaml | grep KUBELET_HOSTNAME_OVERRIDE
    KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
# grep /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE /var/log/messages | grep "No such file" | wc -l
0
Hi Weihua,

I cannot get the error in /var/log/messages now, but I can get it with oc logs:

[user@master-0 ~]$ oc get pods
NAME         READY     STATUS    RESTARTS   AGE
sync-dmq8n   1/1       Running   0          1m
sync-jkgjn   1/1       Running   0          1m
sync-vm4dm   1/1       Running   0          1m
[user@master-0 ~]$ oc logs sync-jkgjn
cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory
info: Configuration changed, restarting kubelet
info: Applying node labels node-role.kubernetes.io/master=true

After the fix, you shouldn't get any error (I applied the fix with oc edit ds sync):

[user@master-0 ~]$ oc edit ds sync -n openshift-node
daemonset.extensions/sync edited
[user@master-0 ~]$ oc get pods -n openshift-node
NAME         READY     STATUS        RESTARTS   AGE
sync-jkgjn   1/1       Running       0          7m
sync-vm4dm   0/1       Terminating   0          7m
[user@master-0 ~]$ oc get pods -n openshift-node
NAME         READY     STATUS              RESTARTS   AGE
sync-b6whk   1/1       Running             0          11s
sync-gj4vs   0/1       ContainerCreating   0          0s
sync-xz2vj   1/1       Running             0          11s
[user@master-0 ~]$ oc logs sync-b6whk -n openshift-node
info: Configuration changed, restarting kubelet
info: Applying node labels node-role.kubernetes.io/infra=true
Thanks, Alberto. Fixed.

openshift-ansible-3.11.95-1.git.0.d080cce.el7
Fixed.

openshift-ansible-3.11.98-1.git.0.3cfa7c3.el7

# oc logs -n openshift-node sync-x2zz5
info: Configuration changed, restarting kubelet
info: Applying node labels role=node registry=enabled router=enabled
node/ip-172-18-2-224.ec2.internal labeled
node/ip-172-18-2-224.ec2.internal annotated
node/ip-172-18-2-224.ec2.internal annotated
node/ip-172-18-2-224.ec2.internal annotated
node/ip-172-18-2-224.ec2.internal annotated
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0636