Bug 1521087 - 'oc get nodes -o wide' returns Unknown in the OS-IMAGE field
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Pod (Show other bugs)
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assigned To: Seth Jennings
QA Contact: Weinan Liu
Depends On:
Blocks:
Reported: 2017-12-05 14:23 EST by emahoney
Modified: 2018-10-11 03:19 EDT
CC: 11 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-10-11 03:19:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID: Red Hat Product Errata RHBA-2018:2652 (Priority: None, Status: None, Last Updated: 2018-10-11 03:19 EDT)

Description emahoney 2017-12-05 14:23:06 EST
Description of problem: Post installation, 'oc get nodes -o wide' returns the following:

[root@master1.x.y ~]# oc get nodes -o wide
NAME                             STATUS                     AGE       VERSION             EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION
..8<
master1.x.y   Ready,SchedulingDisabled   9d        v1.6.1+5115d708d7   <none>        OpenShift Enterprise   3.10.0-693.5.2.el7.x86_64
node1.x.y   Ready                      9d        v1.6.1+5115d708d7   <none>        Unknown                3.10.0-693.5.2.el7.x86_64
..8<

So the 'good' host reports OS-IMAGE as 'OpenShift Enterprise' while the 'bad' node reports Unknown. Looking at /etc/os-release, this is what is shown on the bad node:

[root@node1.x.y ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.4 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.4"
PRETTY_NAME=OpenShift
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.4:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.4"

We notice that on the 'good' node, PRETTY_NAME='OpenShift Enterprise'. We have confirmed that openshift_deployment_type=openshift-enterprise is set in the hosts file from the installation and that /etc/ansible/facts.d/openshift.fact is correct on all nodes.

As a test, we modified PRETTY_NAME= on the 'bad' node to PRETTY_NAME='OpenShift Enterprise'. This caused 'oc get nodes -o wide' to report the correct OS-IMAGE. When the node is rebooted, however, os-release and OS-IMAGE revert to the previous configuration.
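The test above is consistent with how the OS-IMAGE column is populated: the kubelet reports the node's status.nodeInfo.osImage from PRETTY_NAME in /etc/os-release on Linux. A minimal Python sketch of that lookup; the "Unknown" fallback is an assumption mirroring the observed output, not the kubelet's actual code:

```python
# Sketch of how the OS-IMAGE value is derived on Linux: the kubelet
# reports PRETTY_NAME from /etc/os-release as status.nodeInfo.osImage.
# The "Unknown" fallback is an assumption mirroring the observed output.
def os_image(os_release_text):
    for line in os_release_text.splitlines():
        if line.startswith("PRETTY_NAME="):
            # Values may be quoted ("OpenShift Enterprise") or bare (OpenShift).
            return line.split("=", 1)[1].strip().strip('"')
    return "Unknown"
```

Running os_image(open('/etc/os-release').read()) on each node should match that node's OS-IMAGE column.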

Version-Release number of selected component (if applicable):
atomic-openshift-3.6.173.0.21-1.git.0.f95b0e7.el7.x86_64

How reproducible: We have attempted to reproduce this in-house but have not been able to.


Steps to Reproduce: n/a

Actual results: oc get nodes -o wide returns Unknown in the OS-IMAGE field


Expected results: oc get nodes -o wide returns OpenShift Enterprise in the OS-IMAGE field


Additional info:
Comment 4 Juan Vallejo 2018-04-18 15:42:49 EDT
@Michal wondering if you had seen this before / if it could be a node issue?
Comment 5 Maciej Szulik 2018-04-20 11:20:29 EDT
Honestly this looks like an installer problem. Moving to them, it's not related to CLI in any way.
Comment 14 Scott Dodson 2018-06-19 11:18:08 EDT
*** Bug 1585653 has been marked as a duplicate of this bug. ***
Comment 17 Weinan Liu 2018-08-27 04:26:29 EDT
Now the "bad" node's OS-IMAGE shows exactly what is in PRETTY_NAME:

[root@ip-172-18-6-174 ~]# cat /etc/os-release|grep PRETTY
PRETTY_NAME="OpenShift"


[root@ip-172-18-14-31 ~]# oc get nodes -o wide
NAME                           STATUS    ROLES     AGE       VERSION           INTERNAL-IP    EXTERNAL-IP      OS-IMAGE                                      KERNEL-VERSION               CONTAINER-RUNTIME
ip-172-18-10-88.ec2.internal   Ready     compute   1h        v1.11.0+d4cacc0   172.18.10.88   34.207.213.202   Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.11.6.el7.x86_64   docker://1.13.1
ip-172-18-14-31.ec2.internal   Ready     master    2h        v1.11.0+d4cacc0   172.18.14.31   54.89.133.133    Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.11.6.el7.x86_64   docker://1.13.1
ip-172-18-6-174.ec2.internal   Ready     <none>    1h        v1.11.0+d4cacc0   172.18.6.174   34.207.103.156   OpenShift                                     3.10.0-862.11.6.el7.x86_64   docker://1.13.1

[root@ip-172-18-6-174 ~]# oc version
oc v3.11.0-0.22.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

[root@ip-172-18-14-31 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)
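To spot-check all nodes at once, the osImage each kubelet reported can be pulled from the API rather than read off the -o wide table. A hedged sketch (the function name is hypothetical; assumes the JSON string comes from 'oc get nodes -o json'):

```python
import json

def node_os_images(nodes_json):
    """Map node name -> status.nodeInfo.osImage from a v1 NodeList JSON string."""
    data = json.loads(nodes_json)
    return {item["metadata"]["name"]: item["status"]["nodeInfo"]["osImage"]
            for item in data["items"]}
```

For example, feed it the stdout of subprocess.run(["oc", "get", "nodes", "-o", "json"], capture_output=True, text=True); any node whose value differs from the expected PRETTY_NAME stands out immediately.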
Comment 19 errata-xmlrpc 2018-10-11 03:19:06 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652
