Bug 1869451 - firewalld should implicitly run --check-config on --reload
Summary: firewalld should implicitly run --check-config on --reload
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: firewalld
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Eric Garver
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-18 03:05 UTC by Han Han
Modified: 2022-02-18 07:27 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-18 07:27:17 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
The conf files and logs (3.21 KB, application/gzip), attached 2020-08-19 03:28 UTC by Han Han

Description Han Han 2020-08-18 03:05:33 UTC
Description of problem:
As the summary says: firewalld should implicitly run --check-config on --reload.

Version-Release number of selected component (if applicable):
firewalld-0.8.2-2.el8.noarch
nftables-0.9.3-16.el8.x86_64
python3-nftables-0.9.3-16.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Update nftables and firewalld
➜  ~ dnf history info 192|grep -E '(nftable|firewall)'
    Upgrade  firewalld-0.8.2-2.el8.noarch                                       @beaker-BaseOS
    Upgraded firewalld-0.8.2-1.el8.noarch                                       @@System
    Upgrade  firewalld-filesystem-0.8.2-2.el8.noarch                            @beaker-BaseOS
    Upgraded firewalld-filesystem-0.8.2-1.el8.noarch                            @@System
    Upgrade  nftables-1:0.9.3-16.el8.x86_64                                     @beaker-BaseOS
    Upgraded nftables-1:0.9.3-14.el8.x86_64                                     @@System
    Upgrade  python3-firewall-0.8.2-2.el8.noarch                                @beaker-BaseOS
    Upgraded python3-firewall-0.8.2-1.el8.noarch                                @@System
    Upgrade  python3-nftables-1:0.9.3-16.el8.x86_64                             @beaker-BaseOS
    Upgraded python3-nftables-1:0.9.3-14.el8.x86_64                             @@System

2. Reload by firewall-cmd
➜  ~ firewall-cmd --reload                            
Error: COMMAND_FAILED: 'python-nftables' failed: 
JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"table": {"family": "inet", "name": "firewalld_policy_drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_input", "type": "filter", "hook": "input", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_forward", "type": "filter", "hook": "forward", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_output", "type": "filter", "hook": "output", "prio": 9, "policy": "drop"}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_input", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_forward", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_output", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}]}


Actual results:
As above

Expected results:
firewall reloaded

Additional info:
This bug will block virt network testing of libvirt

See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1817205#c15
https://bugzilla.redhat.com/show_bug.cgi?id=1817205

Comment 3 Eric Garver 2020-08-18 12:04:43 UTC
Can you share the firewalld configuration and error/warnings from the log file?

Comment 4 Han Han 2020-08-19 03:28:42 UTC
Created attachment 1711794 [details]
The conf files and logs

conf: the firewalld conf from /etc/firewalld
firewalld.log: the log from /var/log/firewalld when executing `firewall-cmd --reload`

Comment 5 Eric Garver 2020-08-19 12:51:15 UTC
(In reply to Han Han from comment #4)
> Created attachment 1711794 [details]
> The conf files and logs
> 
> conf: the firewalld conf from /etc/firewalld
> firewalld.log: the log of /var/log/firewalld when execute `firewall-cmd
> --reload`

Unfortunately there is no error message returned from libnftables.

I took your zone configuration but could not reproduce the failure. I do not have a service definition for "glusterfs" and didn't find any packages that provide it. Can you attach it here?

Comment 6 Eric Garver 2020-08-19 12:52:00 UTC
Another thing to try: set IndividualCalls=yes in /etc/firewalld/firewalld.conf. This will give a better indication of which rule fails to apply in nftables.
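For reference, flipping that option can be scripted. Below is a small sketch; the helper name is mine, not a firewalld tool, and it assumes a firewalld.conf-style key=value file:

```shell
# Hypothetical helper (not shipped with firewalld): set IndividualCalls=yes
# in a firewalld.conf-style file, whether or not the key is present yet.
set_individual_calls() {
  conf="$1"
  if grep -q '^IndividualCalls=' "$conf"; then
    # Key exists: rewrite its value in place.
    sed -i 's/^IndividualCalls=.*/IndividualCalls=yes/' "$conf"
  else
    # Key absent: append it.
    echo 'IndividualCalls=yes' >> "$conf"
  fi
}

# Usage against the real config (then restart to apply):
#   set_individual_calls /etc/firewalld/firewalld.conf
#   systemctl restart firewalld
```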

Comment 7 Han Han 2020-08-20 03:26:48 UTC
The log after setting IndividualCalls=yes:

2020-08-19 22:37:20 WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.                                                   
2020-08-19 22:37:20 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory                                                                                                               


JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}
2020-08-19 22:37:20 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory                                                                                                               


JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}
2020-08-19 22:37:20 ERROR: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory                                                                                               


JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eno1"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}


firewalld works after the glusterfs service is removed from /etc/firewalld/zones/public.xml:
➜  ~ firewall-cmd --reload
success


And the gluster.xml file is provided by glusterfs-server-6.0-40.el8rhgs.x86_64.rpm.

So I think this is not a bug.

However, could firewalld provide a clearer error message when a missing service is used in a zone XML file?
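A check like that can be prototyped outside firewalld. Below is a rough shell sketch; the function name and the grep-based XML scraping are mine (a real implementation would use firewalld's own parser), and the directory layout is the usual one:

```shell
# Hypothetical checker: report services referenced by zone XML files that
# have no service definition in any of the given service directories.
missing_services() {
  zones_dir="$1"; shift
  for zone in "$zones_dir"/*.xml; do
    # Scrape <service name="..."/> references (crude; not a real XML parse).
    grep -o '<service name="[^"]*"' "$zone" | sed 's/.*"\([^"]*\)"/\1/' |
    while read -r svc; do
      found=no
      for dir in "$@"; do
        [ -e "$dir/$svc.xml" ] && found=yes
      done
      [ "$found" = yes ] || echo "$(basename "$zone"): $svc"
    done
  done
}

# Usage against the stock and local service directories:
#   missing_services /etc/firewalld/zones \
#       /usr/lib/firewalld/services /etc/firewalld/services
```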

Comment 8 Eric Garver 2020-08-20 12:31:55 UTC
The first error in the log lets us know:

# systemctl stop firewalld
# truncate -s 0 /var/log/firewalld 
# systemctl start firewalld
# grep ERROR /var/log/firewalld
2020-08-20 08:26:45 ERROR: INVALID_SERVICE: glusterfs
2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory
2020-08-20 08:26:45 ERROR: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory

At this point we're in a failed state because the configuration is invalid.

# firewall-cmd --state
failed

After that, a reload fails, but that's not surprising because we're already in a failed state.

# firewall-cmd --reload
Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: No such file or directory


JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule": {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES", "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==", "right": "eth0"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}

check-config catches the issue though.

# firewall-cmd --check-config
Error: INVALID_SERVICE: 'public.xml': 'glusterfs' not among existing services


In the past we've discussed implicitly running --check-config when the user does a --reload. This would verify the configuration before attempting to reload. I think this would be sufficient to catch this scenario.

Do you agree?
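The proposed behavior can be approximated today with a client-side wrapper. A minimal sketch (the function name is mine; firewall-cmd itself does not do this yet):

```shell
# Sketch of the proposal, done client-side: validate the permanent
# configuration first, and only reload if it passes.
safe_reload() {
  if firewall-cmd --check-config; then
    firewall-cmd --reload
  else
    echo "firewalld configuration invalid; reload aborted" >&2
    return 1
  fi
}
```

This would have caught the glusterfs case above: --check-config fails with INVALID_SERVICE, so the reload is never attempted and firewalld keeps its current runtime state.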

Comment 10 Han Han 2020-08-21 09:13:37 UTC
(In reply to Eric Garver from comment #8)
> The first error in the log lets us know:
> 
> # systemctl stop firewalld
> # truncate -s 0 /var/log/firewalld 
> # systemctl start firewalld
> # grep ERROR /var/log/firewalld
> 2020-08-20 08:26:45 ERROR: INVALID_SERVICE: glusterfs
> 2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error:
> Could not process rule: No such file or directory
> 2020-08-20 08:26:45 ERROR: 'python-nftables' failed: internal:0:0-0: Error:
> Could not process rule: No such file or directory
> 2020-08-20 08:26:45 ERROR: COMMAND_FAILED: 'python-nftables' failed:
> internal:0:0-0: Error: Could not process rule: No such file or directory
> 
> At this point we're in a failed state because the configuration is invalid.
> 
> # firewall-cmd --state
> failed
> 
> After which a reload fails, but that's not surprising because we're already
> failed.
> 
> # firewall-cmd --reload
> Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error:
> Could not process rule: No such file or directory
> 
> 
> JSON blob:
> {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"insert": {"rule":
> {"family": "inet", "table": "firewalld", "chain": "raw_PREROUTING_ZONES",
> "expr": [{"match": {"left": {"meta": {"key": "iifname"}}, "op": "==",
> "right": "eth0"}}, {"goto": {"target": "raw_PRE_public"}}]}}}]}
> 
> check-config catches the issue though.
> 
> # firewall-cmd --check-config
> Error: INVALID_SERVICE: 'public.xml': 'glusterfs' not among existing services
> 
> 
> In the past we've discussed implicitly running --check-config when the user
> does a --reload. This would verify the configuration before attempting to
> reload. I think this would be sufficient to catch this scenario.
> 
> Do you agree?

Yes, I agree --check-config should be run implicitly before a reload.

Comment 14 RHEL Program Management 2022-02-18 07:27:17 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

