Bug 1657769
| Summary: | sync-pod tried to override the hostname | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Fatima <fshaikh> |
| Component: | Installer | Assignee: | Scott Dodson <sdodson> |
| Status: | CLOSED ERRATA | QA Contact: | Weihua Meng <wmeng> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.11.0 | CC: | algonzal, aos-bugs, cjonagam, jokerman, mgugino, mmccomas, sdodson, stwalter, wmeng |
| Target Milestone: | --- | | |
| Target Release: | 3.11.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | If an override hostname had not been set, the sync script generated an error in the sync logs. That error message is no longer emitted when /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE is not present. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1687803 (view as bug list) | Environment: | |
| Last Closed: | 2019-04-11 05:38:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1687803 | | |
Description
Fatima
2018-12-10 12:11:38 UTC
Correction: Expected results: This should be checked, as the file is present.

*** Bug 1657768 has been marked as a duplicate of this bug. ***

Hi team, any updates?

Hi team, any updates?

Can you detail what undesired behavior is actually being experienced? Reading this bug, it appears that it's just a log entry emitted periodically to the journal.

I'm seeing the same issue.

@Scott, Michael: Could you confirm this solution? I already tested it in a lab, and I can write a KCS based on this solution.

- For install/upgrade, change line 115 of roles/openshift_node_group/files/sync:

  Before: KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
  After:  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE 2>/dev/null) || :

- For a working cluster:

  $ oc edit daemonset sync -n openshift-node

  Search for the line containing KUBELET_HOSTNAME_OVERRIDE:

  Before: KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
  After:  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE 2>/dev/null) || :

Note: This "issue" was introduced in PR 10343 [1]. Although it's just an error log line, it can generate a large number of error lines that are not useful.

[1] https://github.com/openshift/openshift-ansible/pull/10343

I've just created KCS 3958481 [1] regarding this issue and its solution. I'll also submit a PR to fix it.

[1] https://access.redhat.com/solutions/3958481

I cannot reproduce this issue. Could you provide steps to reproduce it? Thanks.

openshift-ansible-3.11.70-1.git.0.aa15bf2.el7

  # cat /etc/sysconfig/atomic-openshift-node
  DEBUG_LOGLEVEL=8
  # cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE
  cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory
  # oc get daemonset.apps/sync -o yaml | grep KUBELET_HOSTNAME_OVERRIDE
  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :
  # grep /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE /var/log/messages | grep "No such file" | wc -l
  0

Hi Weihua, I cannot get the error in /var/log/messages now, but I can get it with oc logs:

  [user@master-0 ~]$ oc get pods
  NAME         READY   STATUS    RESTARTS   AGE
  sync-dmq8n   1/1     Running   0          1m
  sync-jkgjn   1/1     Running   0          1m
  sync-vm4dm   1/1     Running   0          1m
  [user@master-0 ~]$ oc logs sync-jkgjn
  cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory
  info: Configuration changed, restarting kubelet
  info: Applying node labels node-role.kubernetes.io/master=true

After the fix, you shouldn't get any error (I applied the fix with oc edit ds sync):

  [user@master-0 ~]$ oc edit ds sync -n openshift-node
  daemonset.extensions/sync edited
  [user@master-0 ~]$ oc get pods -n openshift-node
  NAME         READY   STATUS        RESTARTS   AGE
  sync-jkgjn   1/1     Running       0          7m
  sync-vm4dm   0/1     Terminating   0          7m
  [user@master-0 ~]$ oc get pods -n openshift-node
  NAME         READY   STATUS              RESTARTS   AGE
  sync-b6whk   1/1     Running             0          11s
  sync-gj4vs   0/1     ContainerCreating   0          0s
  sync-xz2vj   1/1     Running             0          11s
  [user@master-0 ~]$ oc logs sync-b6whk -n openshift-node
  info: Configuration changed, restarting kubelet
  info: Applying node labels node-role.kubernetes.io/infra=true

Thanks, Alberto.

Fixed.

openshift-ansible-3.11.95-1.git.0.d080cce.el7

Fixed.
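[Editor's note: a minimal sketch, not part of the bug thread, showing why the one-character change above silences the log noise. It assumes the script runs outside the sync pod and that /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE is absent, as in the reports above; the final echo line is illustrative and not part of the actual sync script.]

  #!/bin/bash
  # Original form: when the file is missing, cat writes
  # "cat: /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE: No such file or directory"
  # to stderr. Command substitution only captures stdout, so the error
  # passes through to the pod logs on every sync run.
  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE) || :

  # Fixed form: the redirect discards cat's stderr inside the command
  # substitution; "|| :" still forces a zero exit status, and the
  # variable is simply empty when no override is configured.
  KUBELET_HOSTNAME_OVERRIDE=$(cat /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE 2>/dev/null) || :

  # Illustrative only: the variable is empty, with no error emitted.
  echo "override='${KUBELET_HOSTNAME_OVERRIDE}'"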
openshift-ansible-3.11.98-1.git.0.3cfa7c3.el7

  # oc logs -n openshift-node sync-x2zz5
  info: Configuration changed, restarting kubelet
  info: Applying node labels role=node registry=enabled router=enabled
  node/ip-172-18-2-224.ec2.internal labeled
  node/ip-172-18-2-224.ec2.internal annotated
  node/ip-172-18-2-224.ec2.internal annotated
  node/ip-172-18-2-224.ec2.internal annotated
  node/ip-172-18-2-224.ec2.internal annotated

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0636
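[Editor's note: for anyone verifying the fix on a live cluster, a short sketch based on the commands used in this thread. The pod name is a placeholder; substitute one of your own sync pods.]

  # Confirm the daemonset now carries the stderr redirect:
  oc get daemonset.apps/sync -n openshift-node -o yaml | grep KUBELET_HOSTNAME_OVERRIDE
  # Expect the matched line to end with "2>/dev/null) || :"

  # Confirm the sync pod log no longer contains the cat error:
  oc logs -n openshift-node <sync-pod-name> | grep "No such file or directory"
  # Expect no output after the fix.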