Bug 1477638 - Using iptables.service and ip6tables.service may lead to no firewall configuration after booting
Summary: Using iptables.service and ip6tables.service may lead to no firewall configuration after booting
Keywords:
Status: CLOSED DUPLICATE of bug 1477413
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: iptables
Version: 7.4
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Thomas Woerner
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-02 14:06 UTC by Robert Scheck
Modified: 2021-06-10 12:44 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-09 12:26:35 UTC
Target Upstream Version:
Embargoed:


Links
System ID: Red Hat Knowledge Base (Solution) 3138851
Last Updated: 2017-08-03 15:15:55 UTC

Description Robert Scheck 2017-08-02 14:06:55 UTC
Description of problem:
1. Static iptables configuration in /etc/sysconfig/iptables
2. systemctl enable iptables.service
3. Static ip6tables configuration in /etc/sysconfig/ip6tables
4. systemctl enable ip6tables.service
5. reboot
6. iptables.service or ip6tables.service fails during boot, leaving either
   no IPv4 or no IPv6 firewalling in place.

Example, first reboot:

--- snipp ---
Aug  2 13:11:27 tux systemd: Starting IPv4 firewall with iptables...
Aug  2 13:11:27 tux ip6tables.init: ip6tables: Applying firewall rules: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Aug  2 13:11:27 tux ip6tables.init: [FAILED]
Aug  2 13:11:27 tux kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Aug  2 13:11:27 tux iptables.init: iptables: Applying firewall rules: [  OK  ]
Aug  2 13:11:27 tux systemd: Started IPv4 firewall with iptables.
Aug  2 13:11:27 tux systemd: ip6tables.service: main process exited, code=exited, status=1/FAILURE
Aug  2 13:11:27 tux systemd: Failed to start IPv6 firewall with ip6tables.
Aug  2 13:11:27 tux systemd: Unit ip6tables.service entered failed state.
Aug  2 13:11:27 tux systemd: ip6tables.service failed.
--- snapp ---

Without any firewall-related change (iptables, ip6tables, netfilter, etc.),
a subsequent reboot looks like this:

--- snipp ---
Aug  2 14:33:12 tux kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Aug  2 14:33:12 tux systemd: Starting IPv4 firewall with iptables...
Aug  2 14:33:12 tux iptables.init: iptables: Applying firewall rules: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Aug  2 14:33:12 tux iptables.init: [FAILED]
Aug  2 14:33:12 tux ip6tables.init: ip6tables: Applying firewall rules: [  OK  ]
Aug  2 14:33:12 tux systemd: Started IPv6 firewall with ip6tables.
Aug  2 14:33:12 tux systemd: iptables.service: main process exited, code=exited, status=1/FAILURE
Aug  2 14:33:12 tux systemd: Failed to start IPv4 firewall with iptables.
Aug  2 14:33:12 tux systemd: Unit iptables.service entered failed state.
Aug  2 14:33:12 tux systemd: iptables.service failed.
--- snapp ---

Ouch! Such random failures may expose services to networks that should not
have access to them.

Putting the following into /etc/systemd/system/ip6tables.service.d/local.conf
serves as a workaround, at least:

--- snipp ---
[Unit]
After=iptables.service
--- snapp ---
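
To install the drop-in (a minimal sketch; the file name "local.conf" is
arbitrary, any *.conf file in the drop-in directory works, and systemd
only picks it up after a daemon-reload):

--- snipp ---
mkdir -p /etc/systemd/system/ip6tables.service.d
printf '[Unit]\nAfter=iptables.service\n' \
    > /etc/systemd/system/ip6tables.service.d/local.conf
systemctl daemon-reload
--- snapp ---

With this ordering, ip6tables.service starts only after iptables.service
has finished, so the two init scripts no longer race for the xtables lock
at boot.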

Version-Release number of selected component (if applicable):
iptables-1.4.21-18.el7.x86_64
iptables-services-1.4.21-18.el7.x86_64

How reproducible:
Every time; see above.

Actual results:
Using iptables.service and ip6tables.service may lead to no firewall 
configuration after booting.

Expected results:
No random IPv4/IPv6 firewall failures due to locking conditions in commonly
used components.

Additional info:
This should be considered a security-related bug/flaw, too.

Comment 2 Robert Scheck 2017-08-02 14:09:00 UTC
Cross-filed ticket 01903155 on the Red Hat customer portal.

Comment 3 Robert Scheck 2017-08-03 15:31:13 UTC
Akhil, I personally dislike the workaround of manually starting the
failed unit given in Red Hat Knowledge Base (Solution) 3138851; wouldn't
it be better to suggest my workaround instead? It is IMHO at least
reboot-safe.

Comment 4 AJ Zmudosky 2017-08-04 05:44:49 UTC
The underlying issue seems to be the backporting of "--wait" to iptables-restore in https://bugzilla.redhat.com/show_bug.cgi?id=1438597 (released in https://access.redhat.com/errata/RHEA-2017:2280): iptables-restore now looks for an exclusive lock file and, by default, exits immediately if it cannot obtain it.

There were no accompanying updates to the scripts in "iptables-services" to use the new "--wait" flag (iptables-services-1.4.21-17.el7.x86_64 and iptables-services-1.4.21-18.el7.x86_64 have identical file contents).
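
For reference, a minimal sketch of the kind of change the init scripts
would need (illustrative only; the actual invocation in iptables.init /
ip6tables.init differs, and the eventual fix may pass a timeout value
to --wait):

    # current behaviour: gives up immediately if the xtables lock is held
    iptables-restore < /etc/sysconfig/iptables

    # with the backported flag: block until the lock is released
    iptables-restore --wait < /etc/sysconfig/iptables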

This issue was discovered in our environment on a new set of system builds that were patched up to 7.4 and rebooted. I plan to deploy the same local drop-in to ip6tables.service as a workaround until this is addressed, to ensure our systems retain their full firewall configurations upon reboot.

Comment 5 Akhil John 2017-08-07 21:13:09 UTC
Hi Robert Scheck,

I have updated the Red Hat Knowledge Base solution 3138851.

Thanks a lot.

Comment 6 Anderson 2017-08-07 22:01:47 UTC
I just logged into Bugzilla in order to report this issue.

In the production environment I manage, system firewall rules are pulled from a Spacewalk server and saved to /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Depending on processor scheduling and hardware conditions, either iptables or ip6tables may fail to start at boot time, leaving part of the network stack open to internal attacks. The larger the firewall rule set, the higher the probability of a startup failure.

For example, ip6tables always fails to start on boot after running this command line, which writes an identical 3,000,000-rule ruleset into both /etc/sysconfig/iptables and /etc/sysconfig/ip6tables:

# ( echo -e '*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]' ;  for a in `seq 1 3000000`; do echo '-A OUTPUT -j RETURN' ; done; echo COMMIT ) | tee /etc/sysconfig/iptables > /etc/sysconfig/ip6tables
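
After the next reboot, the failure can be confirmed without inspecting
logs (a quick sketch; is-failed prints each unit's state and exits 0 if
at least one listed unit is in the failed state):

# systemctl is-failed iptables.service ip6tables.service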

------------------

Manually checking unit status is not feasible here because of the huge number of RHEL / CentOS / Oracle Linux installations; logging into each server to check service status would be time-consuming. I agree with the dependency-ordering workaround:


# systemctl cat iptables.service

(...)

# /etc/systemd/system/iptables.service.d/ip6tables-conflict-solving.conf
[Unit]
Before=ip6tables.service
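
After a daemon-reload, the resulting ordering can be verified (a quick
sketch; the Before= property of iptables.service should now list
ip6tables.service):

# systemctl daemon-reload
# systemctl show -p Before iptables.service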

Comment 7 Thomas Woerner 2017-08-09 12:26:35 UTC

*** This bug has been marked as a duplicate of bug 1477413 ***

Comment 9 Robert Scheck 2018-10-30 18:35:03 UTC
Not sure why this is "needinfo" for me (without any question that I can see).

Comment 11 Robert Scheck 2018-10-31 07:35:25 UTC
I still cannot see any question (I am not a Red Hat employee)...

