Description of problem:
I use a very basic firewall applied via the systemd services iptables & ip6tables.
The latest version randomly fails on start with an xtables-locking issue and suggests the -w option in the failure message.
I fixed this by adding the '-w 1' option to all calls to iptables-restore and ip6tables-restore in /usr/libexec/iptables/*
This seems to reliably fix the problem.
Note: for some reason, specifying just -w (without a value) produces a complaint about a non-numeric option. Maybe another option is interacting with it?
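Concretely, the change looks like this (a sketch, not the exact patch: the variable names are from memory and the restore invocations may differ between script versions):

```shell
# In /usr/libexec/iptables/iptables.init (same idea in ip6tables.init),
# change each restore call along these lines.
# Before:
#   $IPTABLES-restore $OPT $IPTABLES_DATA
# After: wait up to 1 second for the xtables lock before giving up:
#   $IPTABLES-restore -w 1 $OPT $IPTABLES_DATA
```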
Version-Release number of selected component (if applicable):
Added possibly related backport issue.
I have the same problem: after iptables-services-1.4.21-18.el7.x86_64 / iptables-1.4.21-18.el7.x86_64 update:
On server boot either iptables.service or ip6tables.service fails to start with:
iptables.init: iptables: Applying firewall rules: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
and adding --wait <num> to the scripts in /usr/libexec/iptables/* seems to fix this.
I can confirm this only affects systems where firewalld is disabled and BOTH iptables AND ip6tables are enabled. If, for example, only iptables (IPv4) service is enabled, the service starts fine upon reboot.
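That pattern is consistent with both services starting in parallel at boot and racing for /run/xtables.lock. A small sketch that reproduces the shape of the race with flock(1) on a temporary file (not iptables itself; the "services" here are just subshells):

```shell
# Emulate two boot-time services contending for one lock file, the way
# iptables.service and ip6tables.service contend for /run/xtables.lock.
lock=$(mktemp)

# "Service 1" grabs the lock and holds it for a second.
( flock -n 9 || exit 1; sleep 1 ) 9>"$lock" &
holder=$!
sleep 0.2

# "Service 2" without waiting (like plain iptables-restore): fails at once.
if flock -n "$lock" true; then r1=ok; else r1=busy; fi
echo "no-wait: $r1"

# "Service 2" with a wait (like iptables-restore -w): succeeds once free.
if flock -w 5 "$lock" true; then r2=ok; else r2=failed; fi
echo "wait: $r2"

wait "$holder"
rm -f "$lock"
```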
Created attachment 1311172 [details]
Service wait patch
Proposed fix for the service wait issue, with the restore wait patches applied.
*** Bug 1477638 has been marked as a duplicate of this bug. ***
With the update to iptables, I wonder if it makes sense to include -W 100 as well.
The man page shows:
-W, --wait-interval microseconds
Interval to wait per each iteration. When running latency-sensitive applications, waiting for the xtables lock for extended durations may not be acceptable. This option will make each iteration take the amount of time specified. The default interval is 1 second. This option only works with -w.
The default wait period will be 1 second otherwise.
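To make the units concrete, a hedged example of combining the two options (assuming the backported behaviour matches the man page above: -w takes seconds, -W microseconds):

```shell
# Wait up to 10 seconds for the xtables lock, retrying every
# 100000 microseconds (0.1 s) instead of the default 1 second:
iptables-restore -w 10 -W 100000 < /etc/sysconfig/iptables
```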
(In reply to Thomas Woerner from comment #6)
> Created attachment 1311172 [details]
> Service wait patch
> Proposed fix to fix the service wait issue with the restore wait patches applied.
Doesn't ip6tables.init need the same --wait patch?
(In reply to Jarno Huuskonen from comment #10)
> (In reply to Thomas Woerner from comment #6)
> > Created attachment 1311172 [details]
> > Service wait patch
> > Proposed fix to fix the service wait issue with the restore wait patches
> > applied.
> Doesn't ip6tables.init need same --wait patch ?
This is a build environment patch. The ip6tables.init script is generated from the iptables.init script at build time.
Created attachment 1311235 [details]
Service wait patch
Enhanced patch version using iptables-config settings
Created attachment 1311237 [details]
Enhanced documentation for IPTABLES_RESTORE_WAIT_INTERVAL
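For anyone following along, my reading of the patch is that the knobs land in /etc/sysconfig/iptables-config. A sketch of what the settings might look like (the variable names come from the attachment title; the default values shown are my assumption, not taken from the patch):

```shell
# /etc/sysconfig/iptables-config (likewise ip6tables-config)
# Seconds to wait for the xtables lock (passed as iptables-restore -w):
IPTABLES_RESTORE_WAIT="600"
# Microseconds between lock attempts (passed as -W); only used with -w:
IPTABLES_RESTORE_WAIT_INTERVAL="10000"
```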
Thumbs up from me on patch v3.
Being able to tune down the retry interval is fantastic; otherwise large firewall rulesets may take quite some time to apply.
Created attachment 1311251 [details]
Using microseconds properly for the --wait-interval option.
I cannot see how attachment #1311251 [details] would actually do anything until it is explicitly configured. Shouldn't there be a proper default?
(In reply to Robert Scheck from comment #20)
> I can not see how attachment #1311251 [details] would actually do anything
> until it is explicitly configured. Shouldn't there be any proper default?
Sorry, I overlooked the second chunk of the patch.
Just wanted to give this a bit of a nudge....
There's a growing number of RHEL 7.4 systems on the internet where the firewall is currently failing to load....
Just wanted to give this yet another nudge...
Because of reasons mentioned by Steven Haigh
This issue is being addressed in the 7.4.2 z-stream batch; the reference is bug 1481207.
Is there any indication of a timeframe for that release?
I am afraid I cannot give you an exact date, but as noted at https://access.redhat.com/solutions/401413, the asynchronous updates are released on a timescale of weeks. This is planned for the next batch, as the last one was missed due to incomplete fixes.
If you need better information, please contact our support where you can request the package as a hotfix.
This is happening to me (I think it's this bug) on Fedora 26. Are the fixes going into Fedora also?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.