I've been able to reproduce this with a node whose inventory name is set to an IP address, i.e. the second node below fails while the first works:

[nodes]
ose3-master.example.com openshift_node_labels="{'region':'infra','zone':'default'}" openshift_schedulable=true
192.168.122.102 openshift_node_labels="{'region':'primary','zone':'east'}"
My test environment was bad. Now that I've re-provisioned the environment, I can no longer reproduce this when specifying the inventory name as an IP address.
Created attachment 1184683 [details]
/etc/origin, master log, and ansible inventory attached
`oadm policy reconcile-cluster-role-bindings` fixed the issue; existing nodes immediately registered themselves. As to why that's necessary, we're still not sure.
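For anyone hitting this on an existing cluster, a minimal sketch of that workaround as run from one of the masters (assuming the default admin kubeconfig at /etc/origin/master/admin.kubeconfig; the reconcile command only prints the proposed changes unless --confirm is passed):

# Preview which cluster role bindings would be updated (dry run by default)
oadm --config=/etc/origin/master/admin.kubeconfig policy reconcile-cluster-role-bindings

# Apply the reconciliation so existing nodes can register themselves again
oadm --config=/etc/origin/master/admin.kubeconfig policy reconcile-cluster-role-bindings --confirm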
This seems to be the result of 3 API servers starting for the first time at the same time. We can work around this in the installer but it'd be nice if the product itself prevented that from being a problem via some sort of locking mechanism. I'll attach logs. Ansible work-around https://github.com/openshift/openshift-ansible/pull/2233
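The installer-side work-around amounts to not letting all of the API servers do their very first start concurrently. The linked PR is the authoritative change; the commands below are only a rough illustration of the idea for a native-HA install, and the service name and health-check URL are assumptions (OSE service naming, default API port 8443), not what the PR literally does:

# On the first master only: start its API and wait for it to report healthy
systemctl start atomic-openshift-master-api
until curl -ks https://localhost:8443/healthz | grep -q ok; do sleep 2; done

# Only then start the API servers on the remaining masters
systemctl start atomic-openshift-master-api   # run on master2, then master3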
Created attachment 1185156 [details]
API server logs
*** Bug 1361313 has been marked as a duplicate of this bug. ***
Fixed upstream in https://github.com/openshift/origin/pull/10099
This has been merged and is in OSE v3.3.0.14 or newer.
Verified with openshift v3.3.0.14. Succeeded in installing an HA environment and performing an S2I build.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1933