Bug 1040824 - openshift-iptables-port-proxy service returns failure when a user adds iptables rules to the NAT table
Summary: openshift-iptables-port-proxy service returns failure when a user adds iptables rules to the NAT table
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 2.0.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Brenton Leanhardt
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-12 07:55 UTC by Johnny Liu
Modified: 2017-03-08 17:36 UTC
7 users

Fixed In Version: rubygem-openshift-origin-node-1.17.5.8-1 rubygem-openshift-origin-common-1.17.2.7-1
Doc Type: Bug Fix
Doc Text:
The openshift-iptables-port-proxy service compared the count of existing NAT rules against the number asserted by OpenShift Enterprise. As a result, adding any custom NAT rules to a node host caused the openshift-iptables-port-proxy service to incorrectly report a problem with the NAT table. This bug fix updates the NAT table comparison to verify that the count of rules in the NAT table is equal to or greater than the expected number. Additional NAT rules can now be specified without any error messages from the openshift-iptables-port-proxy service.
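A minimal sketch of that relaxed comparison, reusing the variable names from the status function quoted in the description below (the actual shipped code may differ):

    NAT_ASSERTED=`grep DNAT $NAT_FILE | wc -l`
    NAT=`iptables -t nat -L | grep DNAT | wc -l`
    # Only flag a problem when rules asserted by OpenShift are missing;
    # extra, user-added DNAT rules are tolerated.
    if [ $NAT -lt $NAT_ASSERTED ]; then
        echo "ERROR: A difference has been detected between state of $NAT_FILE and the NAT table." 1>&2
        exit 1
    fi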
Clone Of:
Environment:
Last Closed: 2014-02-25 15:42:37 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0209 0 normal SHIPPED_LIVE Red Hat OpenShift Enterprise 2.0.3 bugfix and enhancement update 2014-02-25 20:40:32 UTC

Description Johnny Liu 2013-12-12 07:55:24 UTC
Description of problem:
Review of the status function code in oo-admin-ctl-iptables-port-proxy:
    status)
        if [ ! -f $RULES_FILE ]; then
            echo "ERROR: $RULES_FILE does not exist." 1>&2
            exit 1
        fi

        if [ ! -f $NAT_FILE ]; then
            echo "ERROR: $NAT_FILE does not exist." 1>&2
            exit 1
        fi

        RULES_ASSERTED=`grep ACCEPT $RULES_FILE | wc -l`
        RULES=`iptables -L rhc-app-comm | grep ACCEPT | wc -l`
        if [ ! $RULES_ASSERTED -eq $RULES ]; then
            echo "ERROR: A difference has been detected between state of $RULES_FILE and the rhc-app-comm iptables chain." 1>&2
            exit 1
        fi

        NAT_ASSERTED=`grep DNAT $NAT_FILE | wc -l`
        NAT=`iptables -t nat -L | grep DNAT | wc -l`
        if [ ! $NAT_ASSERTED -eq $NAT ]; then
            echo "ERROR: A difference has been detected between state of $NAT_FILE and the NAT table." 1>&2
            exit 1
        fi

        echo "The OpenShift iptables port proxy is enabled."
        exit 0
        ;;


The status function works by counting iptables rules. For the filter table, OpenShift adds a dedicated chain (rhc-app-comm) and counts the rules within it, which is a sound approach. For the NAT table, however, there is no dedicated OpenShift chain, so if a user adds their own rules to the NAT table, the status check reports a failure.

So the suggestion is that OpenShift should also add a dedicated chain in the NAT table to hold its own rules.

Once the above suggestion is implemented, the start function also needs a minor change:
start() {
    ROUTE_LOCALNET=`sysctl -n net.ipv4.conf.all.route_localnet`
    if [ $ROUTE_LOCALNET -eq "0" ]; then
        echo "WARNING: net.ipv4.conf.all.route_localnet must be enabled." 1>&2
        sysctl -w net.ipv4.conf.all.route_localnet=1
        echo "WARNING: It has been temporarily enabled.  Please ensure this setting is persisted in /etc/sysctl.conf."
    fi

    if [ -f $RULES_FILE ]; then
      { echo -e "*filter\n-F rhc-app-comm"; cat $RULES_FILE; echo "COMMIT"; } | iptables-restore -n
    fi

    if [ -f $NAT_FILE ]; then
      { echo "*nat"; cat $NAT_FILE; echo "COMMIT"; } | iptables-restore --table=nat
    fi
}

Make sure that before applying the OpenShift rules to the NAT table, the existing rules are flushed, just as is done for the filter table.
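For illustration only: if a dedicated NAT chain were introduced as suggested above (rhc-nat-comm is a hypothetical name, and the rules in $NAT_FILE would also have to target that chain), the NAT restore in start() could flush that chain before reapplying the rules, mirroring the filter-table handling. Note that the fix described in the Doc Text ultimately took a different approach.

    if [ -f $NAT_FILE ]; then
      # Flush only the OpenShift-owned NAT chain so user-added NAT rules survive a restart.
      { echo -e "*nat\n-F rhc-nat-comm"; cat $NAT_FILE; echo "COMMIT"; } | iptables-restore -n --table=nat
    fi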


Version-Release number of selected component (if applicable):
rubygem-openshift-origin-node-1.17.5-2.git.22.33efd49.el6op.noarch.rpm

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 7 Peter Ruan 2014-01-31 08:21:54 UTC
Verified with puddle-2014-01-30 by doing the following:


# Launch another devenv that can reach the one where you are testing.  It has to be in AWS because of how our default security group is set up.  To reach these high-numbered ports, the call must be made from inside EC2.
# You should be able to reach the web app from this new machine:
curl -k -H "Host: mynodejsapp1-demo.dev.rhcloud.com" http://$AWS_INTERNAL_DNS:35532
# Delete the rhc-app-comm INPUT rule (on the first devenv)
iptables -D INPUT -j rhc-app-comm
# This should no longer work from the second machine:
curl -k -H "Host: mynodejsapp1-demo.dev.rhcloud.com" http://$AWS_INTERNAL_DNS:35532
# This should report the missing INPUT rule
service openshift-iptables-port-proxy status
# This should fix it and the curl command should now work
service openshift-iptables-port-proxy start   (it will say the chain already exists; this is OK)
curl -k -H "Host: mynodejsapp1-demo.dev.rhcloud.com" http://$AWS_INTERNAL_DNS:35532

Comment 8 Miciah Dashiel Butler Masters 2014-01-31 13:36:49 UTC
Should the verification steps include manipulating the NAT table, or is it enough just to verify that the change didn't break the logic for the filter table?

Comment 9 Brenton Leanhardt 2014-01-31 14:47:23 UTC
To Miciah's point the way to test this would be to:

1) create a scaled app
2) On the node first verify the nat rules exist: iptables -t nat -L
3) Drop the rules:

iptables -t nat -F PREROUTING
iptables -t nat -F OUTPUT

'service openshift-iptables-port-proxy status' should report a problem

4) service openshift-iptables-port-proxy start (put everything back)

5) add a manual nat rule.  You can look in /etc/openshift/iptables.nat.rules and just copy one of those and modify the destination.  Here's an example:

iptables -t nat -A PREROUTING -d 172.16.4.103/32 -m tcp -p tcp --dport 38031 -j DNAT --to-destination 127.1.111.1:8080

'service openshift-iptables-port-proxy status' should say everything is still OK.

Sorry for not making that more clear.  I know these changes aren't obvious to test.
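
For convenience, the manual steps above can be strung together as a single check on the node. All commands are taken from this comment; the example DNAT rule (port 38031, destination 127.1.111.1:8080) is illustrative only.

    # 2) Verify the NAT rules for the scaled app exist.
    iptables -t nat -L -n | grep DNAT

    # 3) Drop the rules; status should now report a problem.
    iptables -t nat -F PREROUTING
    iptables -t nat -F OUTPUT
    service openshift-iptables-port-proxy status

    # 4) Put everything back.
    service openshift-iptables-port-proxy start

    # 5) Add a manual NAT rule; status should still report everything is OK.
    iptables -t nat -A PREROUTING -d 172.16.4.103/32 -m tcp -p tcp --dport 38031 -j DNAT --to-destination 127.1.111.1:8080
    service openshift-iptables-port-proxy status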

Comment 10 Anping Li 2014-02-07 04:57:16 UTC
Verified with puddle-2014-02-06 by doing the following:
1) Created a scaled app, sperl510, and scaled it up across multiple nodes.

2) IPs on node1:
OPENSHIFT_HAPROXY_STATUS_IP=127.11.100.3
OPENSHIFT_HAPROXY_IP=127.11.100.2
OPENSHIFT_PERL_IP=127.11.100.1
OPENSHIFT_PERL_PORT=8080

The NAT rules exist:
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62191 to:127.11.100.1:8080 

IPs on node2
OPENSHIFT_PERL_IP=127.3.236.129

The NAT rules exist:
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62192 to:127.11.100.2:8080 

3) Drop the rules:
[root@nd216 ~]# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination 

[root@nd217 ~]# service openshift-iptables-port-proxy status
ERROR: A difference has been detected between state of /etc/openshift/iptables.nat.rules and the NAT table.
4) Start the proxy and check that the rules are back.
[root@nd216 ~]# service openshift-iptables-port-proxy start
[root@nd216 ~]# service openshift-iptables-port-proxy status
The OpenShift iptables port proxy is enabled.
[root@nd216 ~]# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:59961 to:127.10.133.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:59962 to:127.10.133.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:49121 to:127.6.73.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:49122 to:127.6.73.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:53511 to:127.8.0.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:53512 to:127.8.0.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:36791 to:127.13.48.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:36792 to:127.13.48.130:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:61486 to:127.11.29.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:41671 to:127.3.96.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:57946 to:127.9.187.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:41672 to:127.3.96.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62191 to:127.11.100.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62192 to:127.11.100.2:8080 

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:59961 to:127.10.133.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:59962 to:127.10.133.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:49121 to:127.6.73.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:49122 to:127.6.73.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:53511 to:127.8.0.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:53512 to:127.8.0.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:36791 to:127.13.48.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:36792 to:127.13.48.130:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:61486 to:127.11.29.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:41671 to:127.3.96.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:57946 to:127.9.187.129:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:41672 to:127.3.96.2:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62191 to:127.11.100.1:8080 
DNAT       tcp  --  anywhere             qe-anli-nd1.novalocal tcp dpt:62192 to:127.11.100.2:8080 

5) Add one custom rule line to /etc/sysconfig/iptables and check the proxy status.
[root@nd216 sysconfig]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:rhc-app-comm - [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j rhc-app-comm
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A PREROUTING -d 192.168.55.31/32 -m tcp -p tcp --dport 62192 -j DNAT --to-destination 127.11.100.2:8080
COMMIT
[root@nd216 sysconfig]# service openshift-iptables-port-proxy status
The OpenShift iptables port proxy is enabled.

Comment 12 errata-xmlrpc 2014-02-25 15:42:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0209.html

