Created attachment 1038540 [details]
vlan interface creation script
Description of problem:
ifup becomes slow when large numbers of VLANs are created
Version-Release number of selected component (if applicable):
1:NetworkManager-1.0.0-14.git20150121.b4ea599c.el7.x86_64
kernel 3.10.0-229.el7.x86_64
How reproducible:
100%
Steps to Reproduce:
1. create scripts for 254 VLANs
2. loop over the interface names doing ifup
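The attached script is not reproduced here; a minimal sketch of the reproduction (the parent interface name, VLAN count, and target directory below are assumptions) might look like:

```shell
# Minimal sketch of the reproduction; PARENT and DIR are assumptions.
# On a real system, point DIR at /etc/sysconfig/network-scripts.
DIR=${DIR:-/tmp/vlan-repro}
PARENT=${PARENT:-eth0}
mkdir -p "$DIR"
for id in $(seq 1 254); do
    cat > "$DIR/ifcfg-$PARENT.$id" <<EOF
DEVICE=$PARENT.$id
VLAN=yes
ONBOOT=no
BOOTPROTO=none
EOF
done
ls "$DIR" | wc -l    # 254 config files written
# Then, as root: for id in $(seq 1 254); do time ifup "$PARENT.$id"; done
```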
Actual results:
In a VM on a laptop, the last "ifup" takes multiple seconds to complete.
The sequence as a whole appears to show quadratic behaviour.
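A back-of-the-envelope model of the quadratic claim: if the n-th "ifup" does work proportional to the n devices that already exist, the total work over 254 interfaces is the triangular number n(n+1)/2:

```shell
# Toy arithmetic for the quadratic claim: if the n-th ifup touches all n
# devices created so far, total work over 254 ifups is 254*255/2.
total=0
for n in $(seq 1 254); do
    total=$((total + n))
done
echo "$total"    # prints 32385
```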
Expected results:
Better performance.
Additional info:
Strace of the "ifup" shows 3 slow "nmcli" operations and one slow "grep",
each on the order of 3 seconds. The first "nmcli" is a simple status inquiry:
"nmcli -t --fields running general status"... and it makes over 20,000 write
syscalls (and comparably large numbers of other syscalls). This is repeatable
manually (with the VLANs in place). On a fresh boot without the VLANs, only 168 write syscalls are made.
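The write-syscall counts above came from strace; a sketch of how such a count could be taken (assuming strace is installed and NetworkManager is running) is:

```shell
# Count write syscalls made by the status inquiry (sketch only; requires
# strace and a running NetworkManager, otherwise prints a fallback message).
if command -v strace >/dev/null 2>&1 && command -v nmcli >/dev/null 2>&1; then
    strace -f -e trace=write -c nmcli -t --fields running general status || true
else
    echo "strace and/or nmcli not installed"
fi
```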
I can reproduce the slowness by creating many devices (using the script from the Description) and then performing any operation with nmcli.
For example:
$ nmcli dev | wc -l
263
$ time nmcli general
STATE      CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN
connected  full          enabled  enabled  enabled  enabled
real 0m4.912s
user 0m2.475s
sys 0m0.437s
The problem is not the syscalls per se, but rather the intensive use of GLib in the libnm library. The nm_client_new() function alone accounts for about 50% of the instructions.
I profiled the 'nmcli general' command with valgrind's callgrind. The log is attached in the next comment. The data can be displayed with:
$ callgrind_annotate callgrind.out.23830
or better
$ kcachegrind callgrind.out.23830
which is a very nice GUI tool that presents the data in useful views with call graphs, maps, etc.
Unfortunately, I don't see any single culprit or low-hanging fruit there. It is just obvious that the most intensive functions are memory management and various GLib functions, because libnm calls them far too many times (1-3 million calls), which seems wrong.
Useful link:
http://c.learncodethehardway.org/book/ex41.html
Created attachment 1039130 [details]
Callgrind output for 'nmcli general' for NM 1.0.2
Callgrind output generated by:
valgrind --tool=callgrind nmcli general
on Fedora 22 with NetworkManager-1.0.2-1.fc22.x86_64
Display the data with:
kcachegrind callgrind.out.23830
Looks like the dbus operations done by g_initable_init() could be worth a look
(indeed, is all of that needed for a plain status inquiry? Not that other nmcli uses shouldn't be faster too, but...).
As libnm currently stands, it fetches ~everything~ on initialization. There may be places to optimize the fetching, but in the end, loading everything will take some time on larger systems.
We should investigate fetch on-demand for libnm.
Lubomir suggested that porting libnm to use the GDBus ObjectManager interfaces to talk to NetworkManager (which NM 1.2 already implements service-side) is a possible fix here. We want to do that anyway to work around issues with D-Bus Policy on pending reply maximums.
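For illustration, the ObjectManager pattern replaces many per-object property fetches with a single GetManagedObjects round trip. A hedged sketch using gdbus (object path and method per the D-Bus ObjectManager spec; requires a running NetworkManager >= 1.2, so this is illustrative only):

```shell
# One GetManagedObjects call returns every object NM exports, with all
# interfaces and properties, instead of one round trip per device/connection.
# Sketch only: requires gdbus and a running NetworkManager >= 1.2.
if command -v gdbus >/dev/null 2>&1; then
    gdbus call --system --dest org.freedesktop.NetworkManager \
        --object-path /org/freedesktop \
        --method org.freedesktop.DBus.ObjectManager.GetManagedObjects \
        || echo "NetworkManager not reachable over D-Bus"
else
    echo "gdbus not installed"
fi
```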
All currently planned work to improve performance is already in upstream master, and hence part of the upcoming rhel-7.4.
According to our tests, it significantly improves performance of nmcli/libnm.
I am marking this bug as fixed, although in the future we should find ways to improve performance further.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2017:2299