Bug 1477638

Summary: Using iptables.service and ip6tables.service may lead to no firewall configuration after booting
Product: Red Hat Enterprise Linux 7
Component: iptables
Version: 7.4
Hardware: All
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Target Milestone: rc
Keywords: Security
Type: Bug
Assignee: Thomas Woerner <twoerner>
QA Contact: qe-baseos-daemons
Reporter: Robert Scheck <redhat-bugzilla>
CC: ajohn, ajz, anderson.gomes, contact, fabiano.martins, ffutigam, gkadam, iptables-maint-list, perobins, redhat-bugzilla, richard.cunningham, robert.scheck, twoerner, vjadhav
Last Closed: 2017-08-09 12:26:35 UTC

Description Robert Scheck 2017-08-02 14:06:55 UTC
Description of problem:
1. Static iptables configuration in /etc/sysconfig/iptables
2. systemctl enable iptables.service
3. Static ip6tables configuration in /etc/sysconfig/ip6tables
4. systemctl enable ip6tables.service
5. reboot
6. iptables.service or ip6tables.service fails during boot, resulting in
   either no IPv4 or no IPv6 firewalling.

Example, first reboot:

--- snipp ---
Aug  2 13:11:27 tux systemd: Starting IPv4 firewall with iptables...
Aug  2 13:11:27 tux ip6tables.init: ip6tables: Applying firewall rules: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Aug  2 13:11:27 tux ip6tables.init: [FAILED]
Aug  2 13:11:27 tux kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Aug  2 13:11:27 tux iptables.init: iptables: Applying firewall rules: [  OK  ]
Aug  2 13:11:27 tux systemd: Started IPv4 firewall with iptables.
Aug  2 13:11:27 tux systemd: ip6tables.service: main process exited, code=exited, status=1/FAILURE
Aug  2 13:11:27 tux systemd: Failed to start IPv6 firewall with ip6tables.
Aug  2 13:11:27 tux systemd: Unit ip6tables.service entered failed state.
Aug  2 13:11:27 tux systemd: ip6tables.service failed.
--- snapp ---

Without any firewall-related change (iptables, ip6tables, netfilter, etc.),
a subsequent reboot looks like this:

--- snipp ---
Aug  2 14:33:12 tux kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Aug  2 14:33:12 tux systemd: Starting IPv4 firewall with iptables...
Aug  2 14:33:12 tux iptables.init: iptables: Applying firewall rules: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Aug  2 14:33:12 tux iptables.init: [FAILED]
Aug  2 14:33:12 tux ip6tables.init: ip6tables: Applying firewall rules: [  OK  ]
Aug  2 14:33:12 tux systemd: Started IPv6 firewall with ip6tables.
Aug  2 14:33:12 tux systemd: iptables.service: main process exited, code=exited, status=1/FAILURE
Aug  2 14:33:12 tux systemd: Failed to start IPv4 firewall with iptables.
Aug  2 14:33:12 tux systemd: Unit iptables.service entered failed state.
Aug  2 14:33:12 tux systemd: iptables.service failed.
--- snapp ---

Ouch! Such random failures may expose services to networks that should not
have access to them.

Putting the following in /etc/systemd/system/ip6tables.service.d/local.conf
works as a workaround, at least (an installation sketch follows the snippet):

--- snipp ---
[Unit]
After=iptables.service
--- snapp ---
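
For reference, a minimal shell sketch of installing the drop-in above on one
host (path and content taken from the snippet; run as root):

# create the drop-in directory and the reporter's local.conf
mkdir -p /etc/systemd/system/ip6tables.service.d
cat > /etc/systemd/system/ip6tables.service.d/local.conf <<'EOF'
[Unit]
After=iptables.service
EOF
# make systemd pick up the new drop-in
systemctl daemon-reload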

Version-Release number of selected component (if applicable):
iptables-1.4.21-18.el7.x86_64
iptables-services-1.4.21-18.el7.x86_64

How reproducible:
Every time, see above.

Actual results:
Using iptables.service and ip6tables.service may lead to no firewall 
configuration after booting.

Expected results:
No random IPv4/IPv6 firewall failures due to locking conditions in commonly
used components.

Additional info:
This should be considered as a security related bug/flaw, too.

Comment 2 Robert Scheck 2017-08-02 14:09:00 UTC
Cross-filed ticket 01903155 on the Red Hat customer portal.

Comment 3 Robert Scheck 2017-08-03 15:31:13 UTC
Akhil, I personally dislike the workaround of manually starting the
failed unit in Red Hat Knowledge Base solution 3138851; wouldn't it
be better to suggest my workaround instead? It is IMHO at least reboot-safe.

Comment 4 AJ Zmudosky 2017-08-04 05:44:49 UTC
The underlying issue seems to be the backport of "--wait" to iptables-restore in https://bugzilla.redhat.com/show_bug.cgi?id=1438597 (released in https://access.redhat.com/errata/RHEA-2017:2280): iptables-restore now takes an exclusive xtables lock and, by default, exits immediately if it cannot obtain it.

There were no accompanying updates to the scripts in "iptables-services" to use the new "--wait" flag. (iptables-services-1.4.21-17.el7.x86_64 and iptables-services-1.4.21-18.el7.x86_64 have identical file contents).
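
For illustration only, a hedged sketch of what a wait-enabled restore call
could look like; the variable name and whether the backported iptables-restore
accepts "-w" exactly like the iptables binary (or with a timeout argument) are
assumptions here, not the actual init-script code:

# hypothetical init-script fragment: wait for the xtables lock instead of
# failing immediately when the other address family's service holds it
IPTABLES_DATA=/etc/sysconfig/iptables
iptables-restore -w < "$IPTABLES_DATA" \
    || { echo "iptables: failed to apply firewall rules" >&2; exit 1; }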

This issue was discovered in our environment on a new set of system builds that were patched up to 7.4 and rebooted. I plan to deploy the same local drop-in to ip6tables.service as a workaround until this is addressed, so that our systems retain their full firewall configurations upon reboot.

Comment 5 Akhil John 2017-08-07 21:13:09 UTC
Hi Robert Scheck,

I have updated the Red Hat Knowledge Base solution 3138851.

Thanks a lot.

Comment 6 Anderson 2017-08-07 22:01:47 UTC
I just logged into Bugzilla in order to report this issue.

In the production environment I manage, system firewall rules are pulled from a Spacewalk server and saved to /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Depending on processor scheduling and hardware conditions, either iptables or ip6tables may fail to start at boot time, leaving part of the network stack open to internal attacks. The larger the rule set, the higher the probability of a startup failure.

For example, ip6tables always fails to start on boot after running this command line:

# ( echo -e '*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]' ;  for a in `seq 1 3000000`; do echo '-A OUTPUT -j RETURN' ; done; echo COMMIT ) | tee /etc/sysconfig/iptables > /etc/sysconfig/ip6tables

------------------

Manually checking unit status is not feasible here because of the huge number of RHEL / CentOS / Oracle Linux installations; logging into each server to check service status would be time-consuming. I agree with the dependency ordering workaround (a verification sketch follows the drop-in below):


# systemctl cat iptables.service

(...)

# /etc/systemd/system/iptables.service.d/ip6tables-conflict-solving.conf
[Unit]
Before=ip6tables.service
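
One way to confirm that such a drop-in was picked up, without interactively
inspecting each host, might be to query systemd directly; a sketch using
standard systemd tooling:

# list unit files that are extended by drop-ins
systemd-delta --type=extended
# show the resulting ordering dependencies
systemctl show -p Before iptables.service
systemctl show -p After ip6tables.service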

Comment 7 Thomas Woerner 2017-08-09 12:26:35 UTC

*** This bug has been marked as a duplicate of bug 1477413 ***

Comment 9 Robert Scheck 2018-10-30 18:35:03 UTC
Not sure why this is "needinfo" for me (without any question that I can see).

Comment 11 Robert Scheck 2018-10-31 07:35:25 UTC
I still cannot see any question (I am not a Red Hat employee)...