Description of problem:
As subject.

Version-Release number of selected component (if applicable):
firewalld-0.8.2-2.el8.noarch
nftables-0.9.3-16.el8.x86_64
python3-nftables-0.9.3-16.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Update nftables and firewalld:

➜ ~ dnf history info 192|grep -E '(nftable|firewall)'
    Upgrade  firewalld-0.8.2-2.el8.noarch             @beaker-BaseOS
    Upgraded firewalld-0.8.2-1.el8.noarch             @@System
    Upgrade  firewalld-filesystem-0.8.2-2.el8.noarch  @beaker-BaseOS
    Upgraded firewalld-filesystem-0.8.2-1.el8.noarch  @@System
    Upgrade  nftables-1:0.9.3-16.el8.x86_64           @beaker-BaseOS
    Upgraded nftables-1:0.9.3-14.el8.x86_64           @@System
    Upgrade  python3-firewall-0.8.2-2.el8.noarch      @beaker-BaseOS
    Upgraded python3-firewall-0.8.2-1.el8.noarch      @@System
    Upgrade  python3-nftables-1:0.9.3-16.el8.x86_64   @beaker-BaseOS
    Upgraded python3-nftables-1:0.9.3-14.el8.x86_64   @@System

2. Reload with firewall-cmd:

➜ ~ firewall-cmd --reload
Error: COMMAND_FAILED: 'python-nftables' failed: JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"table": {"family": "inet", "name": "firewalld_policy_drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_input", "type": "filter", "hook": "input", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_forward", "type": "filter", "hook": "forward", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_output", "type": "filter", "hook": "output", "prio": 9, "policy": "drop"}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_input", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_forward", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_output", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}]}

Actual results:
As above.

Expected results:
The firewall reloads successfully.

Additional info:
This bug blocks virt network testing of libvirt.

See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1817205#c15
https://bugzilla.redhat.com/show_bug.cgi?id=1817205
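When a transaction blob like the one above is rejected wholesale, it can help to list what it actually tries to do. A minimal sketch of such a helper (the `summarize` function and the trimmed demo blob are illustrative assumptions, not part of firewalld or python3-nftables):

```python
# Hypothetical debugging aid: list each operation in a firewalld
# nftables JSON transaction, e.g. to see which tables/chains a
# failing blob tries to create.
import json

def summarize(blob):
    """Return (command, object-kind, table, name) for every operation."""
    ops = []
    for item in json.loads(blob)["nftables"]:
        (cmd, payload), = item.items()
        if cmd == "metainfo":          # schema header, not an operation
            continue
        (kind, obj), = payload.items()
        ops.append((cmd, kind, obj.get("table"),
                    obj.get("name") or obj.get("chain")))
    return ops

# Trimmed-down version of the blob from the report:
demo = '''{"nftables": [
  {"metainfo": {"json_schema_version": 1}},
  {"add": {"table": {"family": "inet", "name": "firewalld_policy_drop"}}},
  {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop",
                     "name": "filter_input", "type": "filter",
                     "hook": "input", "prio": 9, "policy": "drop"}}}]}'''
print(summarize(demo))
# → [('add', 'table', None, 'firewalld_policy_drop'),
#    ('add', 'chain', 'firewalld_policy_drop', 'filter_input')]
```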
Can you share the firewalld configuration and error/warnings from the log file?
Created attachment 1711794 [details]
The conf files and logs

conf: the firewalld configuration from /etc/firewalld
firewalld.log: the log from /var/log/firewalld when executing `firewall-cmd --reload`
(In reply to Han Han from comment #4)
> Created attachment 1711794 [details]
> The conf files and logs
>
> conf: the firewalld conf from /etc/firewalld
> firewalld.log: the log of /var/log/firewalld when execute `firewall-cmd
> --reload`

Unfortunately there is no error message returned from libnftables. I took your zone configuration, but I could not reproduce the failure. I do not have a service definition for "glusterfs" and didn't find any package that provides it. Can you attach it here?
Another thing to try: set IndividualCalls=yes in /etc/firewalld/firewalld.conf. This will give a better indication of which rule fails to apply in nftables.
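For reference, flipping that key programmatically might look like this sketch (the helper name and the simplified key handling are assumptions; editing the file by hand works just as well):

```python
# Hypothetical sketch: set IndividualCalls=yes in a firewalld.conf-style
# key=value file so firewalld submits rules to nftables one at a time,
# making the failing rule show up individually in the log.
from pathlib import Path

def set_individual_calls(conf_path, value="yes"):
    path = Path(conf_path)
    lines, seen = [], False
    for line in path.read_text().splitlines():
        if line.startswith("IndividualCalls="):
            line, seen = f"IndividualCalls={value}", True
        lines.append(line)
    if not seen:                      # key absent: append it
        lines.append(f"IndividualCalls={value}")
    path.write_text("\n".join(lines) + "\n")
```

A `systemctl restart firewalld` is still needed afterwards for the change to take effect.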
The log after setting IndividualCalls=yes:

2020-08-19 22:37:20 WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
2020-08-19 22:37:20 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}
2020-08-19 22:37:20 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}
2020-08-19 22:37:20 ERROR: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}

firewalld works again after the glusterfs service is removed from /etc/firewalld/zones/public.xml:

➜ ~ firewall-cmd --reload
success

The gluster.xml service definition will be provided by glusterfs-server-6.0-40.el8rhgs.x86_64.rpm, so I think this is not a firewalld bug. However, could firewalld provide a clearer error message when a zone XML references a missing service?
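The kind of cross-check being asked for here can be sketched in a few lines (the `missing_services` helper, the directory layout, and the demo files are illustrative assumptions; firewalld's real loader also consults the packaged definitions under /usr/lib/firewalld):

```python
# Hypothetical sketch: report services referenced by zone XML files
# that have no matching service definition file, mimicking the check
# that would have flagged the missing "glusterfs" service here.
import xml.etree.ElementTree as ET
from pathlib import Path

def missing_services(zone_dir, service_dirs):
    """Map zone file name -> list of referenced-but-undefined services."""
    known = {p.stem for d in service_dirs for p in Path(d).glob("*.xml")}
    problems = {}
    for zone in sorted(Path(zone_dir).glob("*.xml")):
        used = [e.get("name")
                for e in ET.parse(zone).getroot().iter("service")]
        unknown = [s for s in used if s not in known]
        if unknown:
            problems[zone.name] = unknown
    return problems
```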
The first error in the log lets us know:

# systemctl stop firewalld
# truncate -s 0 /var/log/firewalld
# systemctl start firewalld
# grep ERROR /var/log/firewalld
2020-08-20 08:26:45 ERROR: INVALID_SERVICE: glusterfs
2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
2020-08-20 08:26:45 ERROR: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory

At this point we're in a failed state because the configuration is invalid.

# firewall-cmd --state
failed

After that, a reload fails, but that's not surprising because we're already failed.

# firewall-cmd --reload
Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eth0"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}

check-config catches the issue, though:

# firewall-cmd --check-config
Error: INVALID_SERVICE: 'public.xml': 'glusterfs' not among existing services

In the past we've discussed implicitly running --check-config when the user does a --reload. This would verify the configuration before attempting the reload. I think this would be sufficient to catch this scenario.

Do you agree?
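As a rough illustration of that proposal, a wrapper could run the check first and only reload on success (the `safe_reload` helper is a sketch, not firewalld's implementation; the `cmd` parameter exists only so the control flow can be exercised without a live firewalld):

```python
# Hypothetical sketch of "--check-config before --reload": abort the
# reload with the clear validation error (e.g. INVALID_SERVICE) instead
# of letting nftables fail with a cryptic "No such file or directory".
import subprocess

def safe_reload(cmd="firewall-cmd"):
    check = subprocess.run([cmd, "--check-config"],
                           capture_output=True, text=True)
    if check.returncode != 0:
        # Surface the validation message and skip the reload entirely.
        return f"config invalid, reload skipped: {check.stderr.strip()}"
    subprocess.run([cmd, "--reload"], check=True)
    return "reloaded"
```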
(In reply to Eric Garver from comment #8)
> At this point we're in a failed state because the configuration is invalid.
>
> # firewall-cmd --state
> failed
>
> check-config catches the issue though.
>
> # firewall-cmd --check-config
> Error: INVALID_SERVICE: 'public.xml': 'glusterfs' not among existing services
>
> In the past we've discussed implicitly running --check-config when the user
> does a --reload. This would verify the configuration before attempting to
> reload. I think this would be sufficient to catch this scenario.
>
> Do you agree?

Yes, I agree: --check-config should be run implicitly before a reload.
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.