In an IPv6 Azure cluster, there will be error messages in the ovnkube-node logs like:

    time="2020-03-05T21:14:51Z" level=error msg="Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule \"-d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863\": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863 --wait]: exit status 2: ip6tables v1.8.2 (nf_tables): Bad IP address \"fd99::2:31863\"\n\nTry `ip6tables -h' or 'ip6tables --help' for more information.\n"

(The "Bad IP address" being the relevant bit.) We need to fix those (by putting brackets around the IP).
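The fix described above can be sketched as follows. This is an illustrative helper, not the actual ovn-kubernetes code: naive string concatenation of an IPv6 address and a port produces an ambiguous literal like "fd99::2:31863", while Go's net.JoinHostPort adds the brackets that ip6tables requires in --to-destination.

```go
package main

import (
	"fmt"
	"net"
)

// formatEndpoint is a hypothetical helper showing the fix. The buggy code
// effectively did ip + ":" + port, yielding "fd99::2:31863", which ip6tables
// rejects as "Bad IP address". net.JoinHostPort brackets IPv6 addresses.
func formatEndpoint(ip string, port int32) string {
	return net.JoinHostPort(ip, fmt.Sprintf("%d", port))
}

func main() {
	fmt.Println(formatEndpoint("fd99::2", 31863))  // [fd99::2]:31863
	fmt.Println(formatEndpoint("10.0.0.5", 8080))  // 10.0.0.5:8080
}
```

Note that JoinHostPort leaves IPv4 addresses unbracketed, so the same code path works for both families.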
> $ oc get svc
> NAME        TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
> hello-pod   LoadBalancer   fd02::f65d   40.89.244.147   5678:31891/TCP   2m54s

So the external IP here is IPv4. Was that value filled in by you or by OCP? If the latter, what does "oc get node -o yaml" show?
(In reply to Dan Winship from comment #4)
> > $ oc get svc
> > NAME        TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
> > hello-pod   LoadBalancer   fd02::f65d   40.89.244.147   5678:31891/TCP   2m54s
>
> So the external IP here is IPv4. Was that value filled in by you or by OCP?
> If the latter, what does "oc get node -o yaml" show?

Hmm, this was filled in by OCP. Attaching node yaml output.
Created attachment 1674875 [details]
oc get node -oyaml
Steps taken:

1) oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/networking/ping_for_pod_containerPort.json
2) oc create service loadbalancer hello-pod --tcp=5678:8080
Oh. Sorry, I must have been testing a build with additional patches. LoadBalancers don't currently work, and at this point aren't actually expected to work.

At any rate, you can see that the particular part I was trying to fix with this PR is actually fixed. In the original report:

    time="2020-03-05T21:14:51Z" level=error msg="Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule \"-d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863\": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863 --wait]: exit status 2: ip6tables v1.8.2 (nf_tables): Bad IP address \"fd99::2:31863\"\n\nTry `ip6tables -h' or 'ip6tables --help' for more information.\n"

In particular:

    Bad IP address \"fd99::2:31863\"

It's complaining about the "--to-destination fd99::2:31863", because the code was just appending the port number (31863) directly to the IP (fd99::2). In your output:

    E0330 20:07:52.815832    2087 gateway_localnet.go:381] Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule "-d 40.89.244.147 -p TCP --dport 5678 -j DNAT --to-destination [fd99::2]:31891": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 40.89.244.147 -p TCP --dport 5678 -j DNAT --to-destination [fd99::2]:31891 --wait]: exit status 2: ip6tables v1.8.4 (nf_tables): host/network `40.89.244.147' not found

The --to-destination is now correct ("[fd99::2]:31891", with brackets around the IP), but it's complaining about something else (which would need to be fixed in origin, not ovn-kubernetes). So we can call this VERIFIED, but I'm going to close the 4.3 backport bug since it doesn't matter at this point. (The 4.4 backport already merged.)
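The remaining failure above is a different problem: an IPv4 external IP (40.89.244.147) being handed to ip6tables, which only understands IPv6 addresses. A minimal sketch of the kind of address-family check that avoids this (the helper name and paths are illustrative assumptions, not the actual ovn-kubernetes code):

```go
package main

import (
	"fmt"
	"net"
)

// iptablesBinaryFor is a hypothetical helper: ip6tables rejects IPv4
// addresses like 40.89.244.147 ("host/network not found"), so a rule must
// be installed via the binary matching the address family.
func iptablesBinaryFor(addr string) (string, error) {
	ip := net.ParseIP(addr)
	if ip == nil {
		return "", fmt.Errorf("invalid IP address %q", addr)
	}
	if ip.To4() != nil {
		return "/usr/sbin/iptables", nil // IPv4 rule
	}
	return "/usr/sbin/ip6tables", nil // IPv6 rule
}

func main() {
	for _, a := range []string{"40.89.244.147", "fd99::2"} {
		bin, err := iptablesBinaryFor(a)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s -> %s\n", a, bin)
	}
}
```

In a single-stack IPv6 cluster the cleaner fix is arguably to not assign an IPv4 external IP at all, which is why the remaining issue belongs in origin rather than ovn-kubernetes.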
Thanks for the reasoning and explanation, Dan. Makes complete sense.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409