Description of problem:
The openshift-ansible PR #815 (Nov 5): https://github.com/openshift/openshift-ansible/pull/815 exposed a couple of problematic behaviors: https://github.com/openshift/openshift-ansible/issues/943

Setting nodeIP in the node config creates nodes that self-register using the IP instead of the hostname. As a result, all nodes have kubernetes.io/hostname and externalID set to the IP address:

NAME                              LABELS                                STATUS    AGE
oshift01.eng.lab.tlv.redhat.com   kubernetes.io/hostname=10.35.19.229   Ready     8h
oshift02.eng.lab.tlv.redhat.com   kubernetes.io/hostname=10.35.19.230   Ready     8h

How reproducible:
100%

Steps to Reproduce:
1. Deploy a cluster

Actual results:
The node labels kubernetes.io/hostname and externalID are set to the IP address.

Expected results:
The node labels kubernetes.io/hostname and externalID are set to the hostname.
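For context, a minimal sketch of the node config fragment in question. This assumes the standard OpenShift node-config.yaml field names (nodeName, nodeIP); the hostname and IP are the example values from the output above:

```yaml
# Hypothetical excerpt from a node-config.yaml (illustrative values only).
# With both fields set, the node registers with the nodeIP instead of the
# nodeName, which is the behavior reported here.
nodeName: oshift01.eng.lab.tlv.redhat.com
nodeIP: 10.35.19.229
```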
Reassigning to the installer since this is unrelated to the deployments feature.
Reassigning to the Node component. The issue here is that when both nodeName and nodeIP are set in the node config, nodeName is ignored and nodeIP is used instead. The end result is that `oc get nodes` lists both the node name and the externalID as the nodeIP rather than the configured nodeName. Instead, the configured nodeName should still be reported as the node name and the externalID, and the nodeIP value should only be used when setting the node status. This also affects the SDN configuration.
I opened bug #1284621 on the Node component. If we do end up moving this to the Node component, we'll have to close one of the two as a duplicate.
Based on the referenced discussion, it looks like this requires a code change. I've assigned https://bugzilla.redhat.com/show_bug.cgi?id=1284621 to Ravi to weigh in, and am duping this one. *** This bug has been marked as a duplicate of bug 1284621 ***