Description of problem:
Create some pods/services, then start firewalld; the rules in the iptables nat table are flushed.

Version-Release number of selected component (if applicable):
oc v3.1.1.2
kubernetes v1.1.0-origin-1107-g4c8e6f4

docker version
Client:
 Version:         1.8.2-el7
 API version:     1.20
 Package Version: docker-1.8.2-10.el7.x86_64
 Go version:      go1.4.2
 Git commit:      a01dc02/1.8.2
 Built:
 OS/Arch:         linux/amd64
Server:
 Version:         1.8.2-el7
 API version:     1.20
 Package Version:
 Go version:      go1.4.2
 Git commit:      a01dc02/1.8.2
 Built:
 OS/Arch:         linux/amd64

# rpm -qa | grep firewalld
firewalld-0.3.9-14.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create some pods/services
2. Check the iptables rules: # iptables -t nat -nL
3. Start firewalld: # systemctl start firewalld
4. Check the iptables rules again

Actual results:
step 2: before starting firewalld

[root@openshift-v3 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target  prot opt source  destination
KUBE-SERVICES  all  --  0.0.0.0/0  0.0.0.0/0  /* kubernetes service portals */
DOCKER  all  --  0.0.0.0/0  0.0.0.0/0  ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target  prot opt source  destination

Chain OUTPUT (policy ACCEPT)
target  prot opt source  destination
KUBE-SERVICES  all  --  0.0.0.0/0  0.0.0.0/0  /* kubernetes service portals */
DOCKER  all  --  0.0.0.0/0  !127.0.0.0/8  ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target  prot opt source  destination
MASQUERADE  all  --  10.1.0.0/24  0.0.0.0/0
MASQUERADE  all  --  10.1.0.0/16  !10.1.0.0/16
MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
MASQUERADE  all  --  0.0.0.0/0  0.0.0.0/0  /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351

Chain DOCKER (2 references)
target  prot opt source  destination

Chain KUBE-NODEPORTS (1 references)
target  prot opt source  destination

Chain KUBE-SEP-DEXGZJ7MAWDXPSTU (1 references)
target  prot opt source  destination
MARK  all  --  192.168.0.36  0.0.0.0/0  /* default/kubernetes:https */ MARK set 0x4d415351
DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:https */ tcp to:192.168.0.36:8443

Chain KUBE-SEP-OEKAEUH6RPMRG3I2 (1 references)
target  prot opt source  destination
MARK  all  --  192.168.0.36  0.0.0.0/0  /* default/kubernetes:dns-tcp */ MARK set 0x4d415351
DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:dns-tcp */ tcp to:192.168.0.36:53

Chain KUBE-SEP-SMDUCIZM5B7W7SBB (1 references)
target  prot opt source  destination
MARK  all  --  192.168.0.36  0.0.0.0/0  /* default/kubernetes:dns */ MARK set 0x4d415351
DNAT  udp  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:dns */ udp to:192.168.0.36:53

Chain KUBE-SEP-TUKFXP2HATZDPGT4 (2 references)
target  prot opt source  destination
MARK  all  --  10.1.1.4  0.0.0.0/0  /* default/docker-registry:5000-tcp */ MARK set 0x4d415351
DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  /* default/docker-registry:5000-tcp */ recent: SET name: KUBE-SEP-TUKFXP2HATZDPGT4 side: source mask: 255.255.255.255 tcp to:10.1.1.4:5000

Chain KUBE-SERVICES (2 references)
target  prot opt source  destination
KUBE-SVC-GQKZAHCS5DTMHUQ6  tcp  --  0.0.0.0/0  172.30.251.212  /* default/router:80-tcp cluster IP */ tcp dpt:80
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0  172.30.0.1  /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SVC-3VQ6B3MLH7E2SZT4  udp  --  0.0.0.0/0  172.30.0.1  /* default/kubernetes:dns cluster IP */ udp dpt:53
KUBE-SVC-BA6I5HTZKAAAJT56  tcp  --  0.0.0.0/0  172.30.0.1  /* default/kubernetes:dns-tcp cluster IP */ tcp dpt:53
KUBE-SVC-ECTPRXTXBM34L34Q  tcp  --  0.0.0.0/0  172.30.105.20  /* default/docker-registry:5000-tcp cluster IP */ tcp dpt:5000
KUBE-NODEPORTS  all  --  0.0.0.0/0  0.0.0.0/0  /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-3VQ6B3MLH7E2SZT4 (1 references)
target  prot opt source  destination
KUBE-SEP-SMDUCIZM5B7W7SBB  all  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:dns */

Chain KUBE-SVC-BA6I5HTZKAAAJT56 (1 references)
target  prot opt source  destination
KUBE-SEP-OEKAEUH6RPMRG3I2  all  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:dns-tcp */

Chain KUBE-SVC-ECTPRXTXBM34L34Q (1 references)
target  prot opt source  destination
KUBE-SEP-TUKFXP2HATZDPGT4  all  --  0.0.0.0/0  0.0.0.0/0  /* default/docker-registry:5000-tcp */ recent: CHECK seconds: 180 reap name: KUBE-SEP-TUKFXP2HATZDPGT4 side: source mask: 255.255.255.255
KUBE-SEP-TUKFXP2HATZDPGT4  all  --  0.0.0.0/0  0.0.0.0/0  /* default/docker-registry:5000-tcp */

Chain KUBE-SVC-GQKZAHCS5DTMHUQ6 (1 references)
target  prot opt source  destination

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target  prot opt source  destination
KUBE-SEP-DEXGZJ7MAWDXPSTU  all  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:https */

step 4: after starting firewalld

[root@openshift-v3 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target  prot opt source  destination

Chain INPUT (policy ACCEPT)
target  prot opt source  destination

Chain OUTPUT (policy ACCEPT)
target  prot opt source  destination

Chain POSTROUTING (policy ACCEPT)
target  prot opt source  destination

[root@openshift-v3 ~]# systemctl start firewalld
[root@openshift-v3 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target  prot opt source  destination
PREROUTING_direct  all  --  0.0.0.0/0  0.0.0.0/0
PREROUTING_ZONES_SOURCE  all  --  0.0.0.0/0  0.0.0.0/0
PREROUTING_ZONES  all  --  0.0.0.0/0  0.0.0.0/0
DOCKER  all  --  0.0.0.0/0  0.0.0.0/0  ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target  prot opt source  destination

Chain OUTPUT (policy ACCEPT)
target  prot opt source  destination
OUTPUT_direct  all  --  0.0.0.0/0  0.0.0.0/0
DOCKER  all  --  0.0.0.0/0  !127.0.0.0/8  ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target  prot opt source  destination
MASQUERADE  all  --  10.1.0.0/24  0.0.0.0/0
POSTROUTING_direct  all  --  0.0.0.0/0  0.0.0.0/0
POSTROUTING_ZONES_SOURCE  all  --  0.0.0.0/0  0.0.0.0/0
POSTROUTING_ZONES  all  --  0.0.0.0/0  0.0.0.0/0

Chain DOCKER (2 references)
target  prot opt source  destination

Chain KUBE-SERVICES (0 references)
target  prot opt source  destination

Chain OUTPUT_direct (1 references)
target  prot opt source  destination

Chain POSTROUTING_ZONES (1 references)
target  prot opt source  destination
POST_public  all  --  0.0.0.0/0  0.0.0.0/0  [goto]
POST_public  all  --  0.0.0.0/0  0.0.0.0/0  [goto]

Chain POSTROUTING_ZONES_SOURCE (1 references)
target  prot opt source  destination

Chain POSTROUTING_direct (1 references)
target  prot opt source  destination

Chain POST_public (2 references)
target  prot opt source  destination
POST_public_log  all  --  0.0.0.0/0  0.0.0.0/0
POST_public_deny  all  --  0.0.0.0/0  0.0.0.0/0
POST_public_allow  all  --  0.0.0.0/0  0.0.0.0/0

Chain POST_public_allow (1 references)
target  prot opt source  destination

Chain POST_public_deny (1 references)
target  prot opt source  destination

Chain POST_public_log (1 references)
target  prot opt source  destination

Chain PREROUTING_ZONES (1 references)
target  prot opt source  destination
PRE_public  all  --  0.0.0.0/0  0.0.0.0/0  [goto]
PRE_public  all  --  0.0.0.0/0  0.0.0.0/0  [goto]

Chain PREROUTING_ZONES_SOURCE (1 references)
target  prot opt source  destination

Chain PREROUTING_direct (1 references)
target  prot opt source  destination

Chain PRE_public (2 references)
target  prot opt source  destination
PRE_public_log  all  --  0.0.0.0/0  0.0.0.0/0
PRE_public_deny  all  --  0.0.0.0/0  0.0.0.0/0
PRE_public_allow  all  --  0.0.0.0/0  0.0.0.0/0

Chain PRE_public_allow (1 references)
target  prot opt source  destination

Chain PRE_public_deny (1 references)
target  prot opt source  destination

Chain PRE_public_log (1 references)
target  prot opt source  destination

Expected results:
iptables rules won't be flushed

Additional info:
I don't think this is really a bug, but I'll let Dan comment. I believe it is required that firewalld be running before docker/kubelet/kube-proxy, although you can restart firewalld after those are running...
Could you reproduce with the following actions?

1) Set --loglevel=5 in /lib/systemd/system/atomic-openshift-node.service (or openshift-node.service, whichever one you use)
2) Restart openshift
3) Reproduce the issue, noting the time when you restarted firewalld
4) journalctl -b -u atomic-openshift-node (or just openshift-node)

What I'm looking for is whether there is a "reloading iptables rules" message in the openshift logs around the time that you start/restart firewalld.
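The log check in step 4 can be sketched as below. The sample journal line is hypothetical (the timestamp, PID, and source-file fields are made up for illustration); only the "reloading iptables rules" substring from the comment above is the real thing to look for, e.g. via `journalctl -b -u atomic-openshift-node | grep -i "reloading iptables"`.

```shell
# Write a hypothetical journal excerpt to a temp file, then grep it the
# same way you would grep the real journalctl output for the reload message.
cat > /tmp/node-journal-sample.log <<'EOF'
Jan 15 10:02:11 openshift-v3 atomic-openshift-node[1234]: reloading iptables rules
Jan 15 10:02:12 openshift-v3 atomic-openshift-node[1234]: some unrelated message
EOF

# Case-insensitive match, as log capitalization may vary.
grep -i "reloading iptables" /tmp/node-journal-sample.log
```

If a matching line appears shortly after the firewalld restart time noted in step 3, the node process did get the firewalld reload notification and attempted to re-sync its rules.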
(In reply to Eric Paris from comment #1)
> I don't think this is really a bug, but I'll let Dan comment. I believe it
> is required that firewalld be running before docker/kubelet/kube-proxy,
> although you can restart firewalld after those are running...

No, it's supposed to work this way as well.
I can't reproduce this. It works fine for me; when I start firewalld, the rules are removed by firewalld and then immediately recreated by openshift.

(In reply to Yan Du from comment #0)
> step4: after starting firewalld
>
> [root@openshift-v3 ~]# iptables -t nat -nL
> Chain PREROUTING (policy ACCEPT)
> target  prot opt source  destination
>
> Chain INPUT (policy ACCEPT)
> target  prot opt source  destination
>
> Chain OUTPUT (policy ACCEPT)
> target  prot opt source  destination
>
> Chain POSTROUTING (policy ACCEPT)
> target  prot opt source  destination
>
> [root@openshift-v3 ~]# systemctl start firewalld
> [root@openshift-v3 ~]# iptables -t nat -nL
> Chain PREROUTING (policy ACCEPT)
> target  prot opt source  destination
> PREROUTING_direct  all  --  0.0.0.0/0  0.0.0.0/0
> PREROUTING_ZONES_SOURCE  all  --  0.0.0.0/0  0.0.0.0/0
> PREROUTING_ZONES  all  --  0.0.0.0/0  0.0.0.0/0

Note that the iptables rules are *already* missing before you start firewalld here. What did you do between the original "iptables -t nat -nL" (which showed the openshift rules) and the second one, quoted above, which does not?
Actually it was not working last Friday; I waited about 10 minutes and the rules were still not restored.

I tested it again on the latest OSE env today:
oc v3.1.1.4
kubernetes v1.1.0-origin-1107-g4c8e6f4

But the issue could not be reproduced; the iptables rules work normally now. Could you please move it to ON_QA and I will close the bug. Thanks.