Bug 1810814 - fix ovn-kubernetes iptables loadbalancer rules for IPv6
Summary: fix ovn-kubernetes iptables loadbalancer rules for IPv6
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.5.0
Assignee: Dan Winship
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1810816
 
Reported: 2020-03-05 23:06 UTC by Dan Winship
Modified: 2020-07-13 17:18 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1810816 1810817
Environment:
Last Closed: 2020-07-13 17:18:35 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
oc get node -oyaml (57.91 KB, text/plain)
2020-03-30 23:30 UTC, Anurag saxena


Links
System ID Private Priority Status Summary Last Updated
Github openshift ovn-kubernetes pull 112 0 None closed Bug 1810814: CARRY: ovn: fix cloud load balancer rules for IPv6 2021-02-16 14:48:51 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:18:54 UTC

Description Dan Winship 2020-03-05 23:06:21 UTC
In an IPv6 Azure cluster, there will be error messages in the ovnkube-node logs like:

time="2020-03-05T21:14:51Z" level=error msg="Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule \"-d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863\": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863 --wait]: exit status 2: ip6tables v1.8.2 (nf_tables): Bad IP address \"fd99::2:31863\"\n\nTry `ip6tables -h' or 'ip6tables --help' for more information.\n"


(The "Bad IP address" being the relevant bit.)

We need to fix those (by putting brackets around the IP).

Comment 4 Dan Winship 2020-03-30 21:16:57 UTC
> $ oc get svc
> NAME        TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
> hello-pod   LoadBalancer   fd02::f65d   40.89.244.147   5678:31891/TCP   2m54s

So the external IP here is IPv4. Was that value filled in by you or by OCP? If the latter, what does "oc get node -o yaml" show?

Comment 5 Anurag saxena 2020-03-30 23:30:14 UTC
(In reply to Dan Winship from comment #4)
> > $ oc get svc
> > NAME        TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
> > hello-pod   LoadBalancer   fd02::f65d   40.89.244.147   5678:31891/TCP   2m54s
> 
> So the external IP here is IPv4. Was that value filled in by you or by OCP?
> If the latter, what does "oc get node -o yaml" show?

Hmm... this was filled in by OCP. Attaching node YAML output.

Comment 6 Anurag saxena 2020-03-30 23:30:52 UTC
Created attachment 1674875 [details]
oc get node -oyaml

Comment 7 Anurag saxena 2020-03-30 23:31:40 UTC
Steps taken:

1) oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/networking/ping_for_pod_containerPort.json
2) oc create service loadbalancer hello-pod --tcp=5678:8080

Comment 8 Dan Winship 2020-03-31 12:44:44 UTC
Oh. Sorry, I must have been testing a build with additional patches. LoadBalancers don't currently work, and at this point aren't actually expected to work.


At any rate, you can see that the particular part I was trying to fix with this PR is actually fixed. In the original report:

time="2020-03-05T21:14:51Z" level=error msg="Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule \"-d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863\": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 2603:1030:b:3::1f -p TCP --dport 80 -j DNAT --to-destination fd99::2:31863 --wait]: exit status 2: ip6tables v1.8.2 (nf_tables): Bad IP address \"fd99::2:31863\"\n\nTry `ip6tables -h' or 'ip6tables --help' for more information.\n"

In particular:

  Bad IP address \"fd99::2:31863\"

It's complaining about the "--to-destination fd99::2:31863", because the code was just appending the port number (31863) directly to the IP (fd99::2).

In your output:

E0330 20:07:52.815832    2087 gateway_localnet.go:381] Error in modifying service: failed to add iptables nat/OVN-KUBE-NODEPORT rule "-d 40.89.244.147 -p TCP --dport 5678 -j DNAT --to-destination [fd99::2]:31891": running [/usr/sbin/ip6tables -t nat -C OVN-KUBE-NODEPORT -d 40.89.244.147 -p TCP --dport 5678 -j DNAT --to-destination [fd99::2]:31891 --wait]: exit status 2: ip6tables v1.8.4 (nf_tables): host/network `40.89.244.147' not found

The --to-destination is now correct ("[fd99::2]:31891", with brackets around the IP) but it's complaining about something else (which would need to be fixed in origin, not ovn-kubernetes).
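That remaining failure is an address-family mismatch: an IPv4 external IP (40.89.244.147) is being fed to ip6tables, which cannot resolve it. A minimal sketch of the kind of family check that avoids this, assuming a hypothetical helper `utilFor` (not the actual origin or ovn-kubernetes code):

```go
package main

import (
	"fmt"
	"net"
)

// utilFor is a hypothetical helper: it picks the iptables binary
// matching the address family of the rule's destination, so an IPv4
// address is never handed to ip6tables (and vice versa).
func utilFor(addr string) string {
	ip := net.ParseIP(addr)
	if ip != nil && ip.To4() == nil {
		return "ip6tables"
	}
	return "iptables"
}

func main() {
	fmt.Println(utilFor("40.89.244.147")) // IPv4 destination
	fmt.Println(utilFor("fd99::2"))       // IPv6 destination
}
```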


So we can call this VERIFIED, but I'm going to close the 4.3 backport bug since it doesn't matter at this point. (The 4.4 backport already merged.)

Comment 9 Anurag saxena 2020-03-31 20:44:47 UTC
Thanks for the reasoning and explanation, Dan. Makes complete sense.

Comment 11 errata-xmlrpc 2020-07-13 17:18:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

