Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1521087

Summary: 'oc get nodes -o wide' returns Unknown in the OS-IMAGE field
Product: OpenShift Container Platform    Reporter: emahoney
Component: Node    Assignee: Seth Jennings <sjenning>
Status: CLOSED ERRATA    QA Contact: Weinan Liu <weinliu>
Severity: medium    Docs Contact:
Priority: low
Version: 3.6.1    CC: aos-bugs, ccoleman, dmoessne, jokerman, maszulik, mfojtik, mmagnani, mmccomas, rkshirsa, sdodson, wmeng
Target Milestone: ---   
Target Release: 3.11.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-10-11 07:19:06 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description emahoney 2017-12-05 19:23:06 UTC
Description of problem: Post installation, 'oc get nodes -o wide' returns the following:

[root.y ~]# oc get nodes -o wide
NAME                             STATUS                     AGE       VERSION             EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION
..8<
master1.x.y   Ready,SchedulingDisabled   9d        v1.6.1+5115d708d7   <none>        OpenShift Enterprise   3.10.0-693.5.2.el7.x86_64
node1.x.y   Ready                      9d        v1.6.1+5115d708d7   <none>        Unknown                3.10.0-693.5.2.el7.x86_64
..8<

So the 'good' host is returning OS-IMAGE as 'OpenShift Enterprise' and the 'bad' node is returning Unknown. Looking at /etc/os-release, this is what is shown on the bad node:

[root.y ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.4 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.4"
PRETTY_NAME=OpenShift
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.4:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.4"

We notice that on the 'good' node, PRETTY_NAME='OpenShift Enterprise'. We have confirmed that openshift_deployment_type=openshift-enterprise is set in the inventory hosts file from the installation, and that /etc/ansible/facts.d/openshift.fact on all of the nodes is accurate/correct.
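As the behavior above suggests, the OS-IMAGE column is populated from the PRETTY_NAME field of each node's /etc/os-release. A minimal sketch of that mapping, run against a scratch sample file so nothing on a node is touched (file path and values are illustrative, not from the affected hosts):

```shell
# Build a sample os-release file mirroring the 'bad' node's contents.
cat > /tmp/os-release.sample <<'EOF'
NAME="Red Hat Enterprise Linux Server"
VERSION="7.4 (Maipo)"
PRETTY_NAME=OpenShift
EOF

# Strip the key and any surrounding quotes to get the value that would
# be reported as OS-IMAGE.
pretty=$(sed -n 's/^PRETTY_NAME=//p' /tmp/os-release.sample | tr -d '"')
echo "$pretty"
```

On the 'bad' node this yields 'OpenShift', which is why the column differs from the 'good' node rather than the field being unreadable.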

As a test, we modified PRETTY_NAME on the 'bad' node to PRETTY_NAME='OpenShift Enterprise'. This led 'oc get nodes -o wide' to report the correct OS-IMAGE. When the node is rebooted, however, /etc/os-release and OS-IMAGE revert to the previous configuration.
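The manual test described above can be sketched as follows, run against a scratch copy rather than the live file (the starting value is illustrative; as noted, an in-place edit of the real /etc/os-release does not survive a reboot):

```shell
# Scratch copy standing in for /etc/os-release on the 'bad' node.
target=/tmp/os-release.test
printf 'PRETTY_NAME=OpenShift\n' > "$target"

# Replace the PRETTY_NAME line, as in the reporter's test.
sed -i 's/^PRETTY_NAME=.*/PRETTY_NAME="OpenShift Enterprise"/' "$target"
grep '^PRETTY_NAME=' "$target"
```

Because the change reverts on reboot, something re-templates the file at boot, which points at provisioning/installer tooling rather than the CLI.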

Version-Release number of selected component (if applicable):
atomic-openshift-3.6.173.0.21-1.git.0.f95b0e7.el7.x86_64

How reproducible: We have attempted to reproduce this in-house, but have not been able to reproduce the issue.


Steps to Reproduce:
1. n/a

Actual results: oc get nodes -o wide returns Unknown in the OS-IMAGE field


Expected results: oc get nodes -o wide returns OpenShift Enterprise in the OS-IMAGE field


Additional info:

Comment 4 Juan Vallejo 2018-04-18 19:42:49 UTC
@Michal wondering if you had seen this before / if it could be a node issue?

Comment 5 Maciej Szulik 2018-04-20 15:20:29 UTC
Honestly this looks like an installer problem. Moving to them, it's not related to CLI in any way.

Comment 14 Scott Dodson 2018-06-19 15:18:08 UTC
*** Bug 1585653 has been marked as a duplicate of this bug. ***

Comment 17 Weinan Liu 2018-08-27 08:26:29 UTC
Now the "bad" node shows exactly what is in PRETTY_NAME:

[root@ip-172-18-6-174 ~]# cat /etc/os-release|grep PRETTY
PRETTY_NAME="OpenShift"


[root@ip-172-18-14-31 ~]# oc get nodes -o wide
NAME                           STATUS    ROLES     AGE       VERSION           INTERNAL-IP    EXTERNAL-IP      OS-IMAGE                                      KERNEL-VERSION               CONTAINER-RUNTIME
ip-172-18-10-88.ec2.internal   Ready     compute   1h        v1.11.0+d4cacc0   172.18.10.88   34.207.213.202   Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.11.6.el7.x86_64   docker://1.13.1
ip-172-18-14-31.ec2.internal   Ready     master    2h        v1.11.0+d4cacc0   172.18.14.31   54.89.133.133    Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.11.6.el7.x86_64   docker://1.13.1
ip-172-18-6-174.ec2.internal   Ready     <none>    1h        v1.11.0+d4cacc0   172.18.6.174   34.207.103.156   OpenShift                                     3.10.0-862.11.6.el7.x86_64   docker://1.13.1

[root@ip-172-18-6-174 ~]# oc version
oc v3.11.0-0.22.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

[root@ip-172-18-14-31 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Comment 19 errata-xmlrpc 2018-10-11 07:19:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652