Description of problem:
firewall-cmd --reload breaks connectivity, and firewalld needs to be restarted in order to recover access. It somehow uses nft when reloading, even though the backend in use is iptables.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Update to the latest Rawhide
2. Run firewall-cmd --reload

Actual results:
INPUT policy becomes DROP
OUTPUT policy becomes DROP

Expected results:
Nothing should happen

Additional info:
Created attachment 1528251 [details] firewalld logs
Created attachment 1528252 [details] firewalld configuration files
Created attachment 1528253 [details] script used to configure the local firewall.
When I reload, one of the first things we notice is the following:

[root@zappa log]# iptables -nL | grep policy
Chain INPUT (policy ACCEPT)
Chain FORWARD (policy ACCEPT)
Chain OUTPUT (policy ACCEPT)

becomes:

[root@zappa log]# iptables -nL | grep policy
Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)

Running "systemctl restart firewalld" restores all the rules and policies.
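For anyone hitting the same lockout, the state above can be detected mechanically. The snippet below is a minimal sketch (not from the original report) that parses the `iptables -nL` chain headers quoted above and counts built-in chains whose default policy is DROP; the sample text is hard-coded here so the check can be read without a live firewall, but on a real host you would pipe the actual `iptables -nL` output instead.

```shell
# Chain headers as captured in the report after the failed reload
# (hard-coded sample; on a real host use: iptables -nL | grep 'policy').
out='Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)'

# Count built-in chains whose default policy is DROP; any non-zero
# count means new traffic is being silently discarded.
drops=$(printf '%s\n' "$out" | grep -c 'policy DROP')
echo "$drops"   # prints 3 for the sample above
```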
Created attachment 1528254 [details] firewalld in debug while reproducing the issue
This is when I restarted firewalld in debug:

2019-02-08 19:57:31 DEBUG1: start()

and this is probably when the issue starts:

2019-02-08 19:58:29 DEBUG1: reload()
2019-02-08 19:58:29 DEBUG1: Setting policy to 'DROP'
Created attachment 1528256 [details] dnf.rpm.log

The issue started today after updating packages, I guess.
One last comment for today: you can forget my initial comment about nft. I just noticed the firewalld logs were never rotated, and this issue actually happened when we migrated to nft, which I reverted by changing the backend back to iptables.
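For reference, the backend switch mentioned here is a single setting in firewalld's main configuration file. The fragment below is a sketch of the relevant line (option and value names per firewalld's documented configuration, not taken from the attached config files):

```
# /etc/firewalld/firewalld.conf
# Selects the rule backend; valid values are "nftables" and "iptables".
FirewallBackend=iptables
```

Note that changing this value requires a full `systemctl restart firewalld`; a plain `firewall-cmd --reload` is not sufficient for a backend change to take effect.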
(In reply to David Hill from comment #5)
> Created attachment 1528254 [details]
> firewalld in debug while reproducing the issue

From the logs:

firewall.errors.FirewallError: COMMAND_FAILED: '/usr/sbin/ebtables-restore --noflush' failed: Bad table name 'nat'.

So marking it as a duplicate of bug 1672683.

*** This bug has been marked as a duplicate of bug 1672683 ***