Bug 1768956 - Include info about flow_dissector eBPF programs (per net name space)
Summary: Include info about flow_dissector eBPF programs (per net name space)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: sos
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Pavel Moravec
QA Contact: Miroslav Hradílek
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-05 16:30 UTC by Jiri Benc
Modified: 2023-02-12 22:23 UTC
CC List: 5 users

Fixed In Version: sos-3.8-2.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 17:01:57 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:


Attachments: none


Links
GitHub sosreport/sos pull 1874 (closed): [kernel,networking] collect bpftool net list for each namespace - last updated 2020-08-05 09:33:48 UTC
Red Hat Issue Tracker RHELPLAN-31231 - last updated 2023-02-12 22:23:07 UTC
Red Hat Product Errata RHEA-2020:1900 - last updated 2020-04-28 17:02:07 UTC

Description Jiri Benc 2019-11-05 16:30:53 UTC
As eBPF functionality expands, bpf programs can influence more and more of the system's behavior. sosreport already calls bpftool, but bpftool is gaining new functionality, too, which sosreport should use to provide more information to support engineers.

In particular, the ability to load flow_dissector bpf programs was added recently. Those programs can mess up packet parsing in the kernel, causing packet drops and other undesired behavior.

bpftool has a new subcommand, 'net' (i.e., invoked as 'bpftool net'). It provides information about bpf programs attached to the flow dissector hook (and to the xdp and tc hooks, too, but those are not important here; we already capture that information elsewhere).

Please run 'bpftool net' in each net name space.

See the upstream kernel patch for some more details:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7f0c57fec80f198ae9fcd06e5bbca13196815a4b

Comment 1 Pavel Moravec 2019-12-03 09:14:23 UTC
(In reply to Jiri Benc from comment #0)
> Please run 'bpftool net' in each net name space.

Three questions:

Q1) So the bash code for this would be:

for space in $(ip netns | awk '{ print $1 }'); do    # 'awk' just to trim the namespace name
    ip netns exec "$space" bpftool net                # call and collect this cmd output
done

?

Q2) Thinking about a possible implementation (whether this should go to the kernel or the networking plugin), can we rely on "ip netns" returning the same list as "ls /var/run/netns"?

Q3) And in general: does bpftool belong to the kernel plugin or to the networking plugin? I.e., where would sosreport users rather look for the command outputs - in the kernel plugin or in the networking plugin? Currently we call it from the kernel plugin, but "per networking namespace" sounds more like networking - yet then we would call the same tool once from kernel and once from networking. Or should all calls of the tool move to the networking plugin?

Comment 2 Jiri Benc 2019-12-03 09:40:13 UTC
Ad Q3: with 'bpftool net' added, the command is now a weird mix of general kernel data and networking data. I would say that the data currently captured by bpftool belongs with the kernel plugin (where it is now) and 'bpftool net' belongs with the networking plugin. Is that okay?

Ad Q1 and Q2: there are already several commands run per netns in plugins/networking.py:

            for namespace in out_ns:
                ns_cmd_prefix = cmd_prefix + namespace + " "
                self.add_cmd_output([
                    ns_cmd_prefix + "ip address show",
                    ns_cmd_prefix + "ip route show table all",
                    ns_cmd_prefix + "iptables-save",
                    ns_cmd_prefix + "netstat %s -neopa" % self.ns_wide,
                    ns_cmd_prefix + "netstat -s",
                    ns_cmd_prefix + "netstat %s -agn" % self.ns_wide
                ])

I guess 'bpftool net' can be just added there?

Comment 3 Jiri Benc 2019-12-03 09:43:41 UTC
(In reply to Jiri Benc from comment #2)
> I guess 'bpftool net' can be just added there?

And to the non-netns section, too, to cover the root net name space (and the cases where name spaces are disabled).
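
A minimal sketch of what that could look like inside the networking plugin's setup() (not the actual diff from PR 1874; it reuses the out_ns, cmd_prefix and add_cmd_output names from the snippet above, and the command string "bpftool net list" is taken from the PR title):

    # Hypothetical sketch only, not the merged implementation.

    # Non-netns section: covers the root net namespace (and hosts where
    # network namespaces are disabled).
    self.add_cmd_output("bpftool net list")

    # Per-namespace section: append to the existing command list shown above.
    for namespace in out_ns:
        ns_cmd_prefix = cmd_prefix + namespace + " "
        self.add_cmd_output([
            # ... existing ip/netstat/iptables commands ...
            ns_cmd_prefix + "bpftool net list"
        ])

The actual change was merged upstream via the pull request linked in comment 4.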

Comment 4 Pavel Moravec 2019-12-03 21:06:41 UTC
Thanks for prompt feedback, upstream PR pending:

https://github.com/sosreport/sos/pull/1874

Comment 5 Pavel Moravec 2019-12-16 15:00:49 UTC
Hi,
would you mind verifying this BZ once a candidate build is available (probably still in the 8.2 timeframe)?

Comment 6 Jiri Benc 2019-12-16 15:06:21 UTC
(In reply to Pavel Moravec from comment #5)
> would you mind verifying this BZ once a candidate build is available
> (probably still in the 8.2 timeframe)?

No problem, just point me to the build you want me to test.

Comment 10 Pavel Moravec 2020-01-13 14:54:06 UTC
(In reply to Jiri Benc from comment #6)
> (In reply to Pavel Moravec from comment #5)
> > would you mind verifying this BZ once a candidate build is available
> > (probably still in the 8.2 timeframe)?
> 
> No problem, just point me to the build you want me to test.

Hi,
here you are - thanks in advance for testing!


A yum repository for the build of sos-3.8-2.el8 (task 25733431) is available at:

http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/

You can install the rpms locally by putting this .repo file in your /etc/yum.repos.d/ directory:

http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/sos-3.8-2.el8.repo

RPMs and build logs can be found in the following locations:
http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/noarch/

The full list of available rpms is:
http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/noarch/sos-3.8-2.el8.src.rpm
http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/noarch/sos-3.8-2.el8.noarch.rpm
http://brew-task-repos.usersys.redhat.com/repos/official/sos/3.8/2.el8/noarch/sos-audit-3.8-2.el8.noarch.rpm

Build output will be available for the next 21 days.

Comment 11 Jiri Benc 2020-01-16 19:51:29 UTC
See bug 1721779 comment 17 for the test results. Looks okay.

Comment 14 errata-xmlrpc 2020-04-28 17:01:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1900

