Description of problem:
Node INTERNAL-IP is set to an IPv6 address, while the other nodes are on IPv4. The SDN pod fails to start with the following warning:

W0725 08:05:14.492609 2758 subnets.go:147] HostIP "<IPv6_ADDRESS>" for local subnet does not match with nodeIP "<IPv4_ADDRESS>", Waiting for master to update subnet for node "<NODE_FQDN>"

Version-Release number of selected component (if applicable):
v3.11

Actual results:
HostIP is IPv6 while nodeIP is IPv4.

Expected results:
Both HostIP and nodeIP are set to the same IP address.

Additional info:
- The cluster was upgraded from 3.0 to 3.11; openshift_ip was used in 3.9 to override the nodeIP with the IPv4 address.
- All nodes have both IPv4 and IPv6 addresses set. Disabling IPv6 is not an option.
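For reference, setting openshift_ip as a per-host variable in the openshift-ansible inventory looked roughly like the sketch below. The hostname and address are placeholders, not values from the affected environment:

```
[nodes]
node1.example.com openshift_ip=192.0.2.10
```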
I suspect this is the same as bug 1696628: if a node has both IPv4 and IPv6 addresses configured, the OpenStack cloud provider code returns multiple InternalIP entries, and the kubelet code may then mangle that list. We had not intended to backport the fix for that bug to 3.11, because there it can be worked around with the configmap solution. Can you attach the output of "oc get nodes -o yaml"? That would help confirm this is the same bug and not something OpenStack-specific.
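When reading through the "oc get nodes -o yaml" output, the symptom to look for is a node whose status.addresses list contains InternalIP entries of mixed address families. A minimal sketch of that check, using an illustrative node dict (the name and addresses below are made up, not from the actual cluster):

```python
import ipaddress

def internal_ip_families(node):
    """Return the set of IP versions ({4}, {6}, or {4, 6}) found among
    the node's InternalIP entries."""
    families = set()
    for addr in node["status"]["addresses"]:
        if addr["type"] == "InternalIP":
            families.add(ipaddress.ip_address(addr["address"]).version)
    return families

# Illustrative node, shaped like one item of "oc get nodes -o yaml".
node = {
    "metadata": {"name": "node1.example.com"},
    "status": {"addresses": [
        {"type": "InternalIP", "address": "10.0.0.5"},
        {"type": "InternalIP", "address": "fd00::5"},
        {"type": "Hostname", "address": "node1.example.com"},
    ]},
}

if len(internal_ip_families(node)) > 1:
    print(node["metadata"]["name"], "reports mixed IPv4/IPv6 InternalIP entries")
```

A node that prints here would match the symptom in this bug; a node with a single family should not hit the subnets.go warning.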
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3139