Bug 1874001 - teamd always using 100% cpu usage
Summary: teamd always using 100% cpu usage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libteam
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Xin Long
QA Contact: LiLiang
URL:
Whiteboard:
Depends On: 1873128
Blocks: 1842946 1894546
 
Reported: 2020-08-31 09:44 UTC by Vladimir Benes
Modified: 2020-11-04 13:39 UTC (History)
CC List: 8 users

Fixed In Version: libteam-1.31-2.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1873128
Clones: 1894546
Environment:
Last Closed: 2020-11-04 01:53:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata   ID: RHBA-2020:4512   Private: 0   Priority: None   Status: None   Summary: None   Last Updated: 2020-11-04 01:53:58 UTC

Description Vladimir Benes 2020-08-31 09:44:39 UTC
+++ This bug was initially created as a clone of Bug #1873128 +++

Description of problem:

teamd is constantly using 100% CPU (of one core). teamd was started by NetworkManager for a loadbalance runner to team 2 interfaces together. This seems to have started after a recent upgrade; it didn't use to consume 100% CPU, and I haven't changed any configuration other than running updates.

The team still seems to be working fine, though, since I can see 2 Gbit of throughput going out over the nm-team interface.

Version-Release number of selected component (if applicable):
teamd-1.30-2.fc32.x86_64
libteam-1.30-2.fc32.x86_64
NetworkManager-team-1.22.14-1.fc32.x86_64

How reproducible:
Create and start a team interface using the following configs:

==> /etc/sysconfig/network-scripts/ifcfg-Ethernet_connection_1 <==
NAME="Ethernet connection 1"
UUID=81274114-e6ae-4a0d-8250-9014d6ce1e9f
DEVICE=eno1
ONBOOT=yes
TEAM_MASTER=nm-team
DEVICETYPE=TeamPort
TEAM_MASTER_UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2

==> /etc/sysconfig/network-scripts/ifcfg-Ethernet_connection_2 <==
NAME="Ethernet connection 2"
UUID=448cec16-d828-456c-8f39-8874f1f9a9c0
DEVICE=eno2
ONBOOT=yes
TEAM_MASTER=nm-team
DEVICETYPE=TeamPort
TEAM_MASTER_UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2

==> /etc/sysconfig/network-scripts/ifcfg-Team_connection_1 <==
TEAM_CONFIG="{\"debug_level\": 0, \"runner\": {\"name\": \"loadbalance\", \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"], \"tx_balancer\": {\"name\": \"basic\"}}, \"ports\": {\"eno1\": {}, \"eno2\": {}}}"
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.1.10.123
PREFIX=24
GATEWAY=10.1.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Team connection 1"
UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2
DEVICE=nm-team
ONBOOT=yes
DEVICETYPE=Team
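
For readability, the escaped TEAM_CONFIG string above corresponds to the following teamd JSON configuration (the same string, just unescaped; not a different config):

    {
      "debug_level": 0,
      "runner": {
        "name": "loadbalance",
        "tx_hash": ["eth", "ipv4", "ipv6"],
        "tx_balancer": {"name": "basic"}
      },
      "ports": {
        "eno1": {},
        "eno2": {}
      }
    }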


Actual results:
In htop, teamd is constantly pegged at 100% CPU.

Expected results:
teamd at near 0% CPU usage, as it was before.

Additional info:

ltrace of teamd:

Aug 27 08:58:05.372929 team_get_option_name(0x557621b6ea90, 0x557621b6ea90, 0, 0x557621b55b78) = 0x557621b6eae0
Aug 27 08:58:05.373034 team_is_option_changed(0x557621b6ea90, 0x557621b6ea90, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373138 team_get_next_option(0x557621b4c710, 0x557621b6ea90, 0, 0x557621b55b78) = 0x557621b6ea00
Aug 27 08:58:05.373243 team_get_option_name(0x557621b6ea00, 0x557621b6ea00, 0, 0x557621b55b78) = 0x557621b6ea50
Aug 27 08:58:05.373348 team_is_option_changed(0x557621b6ea00, 0x557621b6ea00, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373452 team_get_next_option(0x557621b4c710, 0x557621b6ea00, 0, 0x557621b55b78) = 0x557621b6e970
Aug 27 08:58:05.373561 team_get_option_name(0x557621b6e970, 0x557621b6e970, 0, 0x557621b55b78) = 0x557621b6e9c0
Aug 27 08:58:05.373666 team_is_option_changed(0x557621b6e970, 0x557621b6e970, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373771 team_get_next_option(0x557621b4c710, 0x557621b6e970, 0, 0x557621b55b78) = 0x557621b6e8e0
Aug 27 08:58:05.373875 team_get_option_name(0x557621b6e8e0, 0x557621b6e8e0, 0, 0x557621b55b78) = 0x557621b6e930
Aug 27 08:58:05.373981 team_is_option_changed(0x557621b6e8e0, 0x557621b6e8e0, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374085 team_get_next_option(0x557621b4c710, 0x557621b6e8e0, 0, 0x557621b55b78) = 0x557621b6e850
Aug 27 08:58:05.374189 team_get_option_name(0x557621b6e850, 0x557621b6e850, 0, 0x557621b55b78) = 0x557621b6e8a0
Aug 27 08:58:05.374292 team_is_option_changed(0x557621b6e850, 0x557621b6e850, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374397 team_get_next_option(0x557621b4c710, 0x557621b6e850, 0, 0x557621b55b78) = 0x557621b6e7c0
Aug 27 08:58:05.374502 team_get_option_name(0x557621b6e7c0, 0x557621b6e7c0, 0, 0x557621b55b78) = 0x557621b6e810
Aug 27 08:58:05.374617 team_is_option_changed(0x557621b6e7c0, 0x557621b6e7c0, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374719 team_get_next_option(0x557621b4c710, 0x557621b6e7c0, 0, 0x557621b55b78) = 0x557621b6e730
Aug 27 08:58:05.374824 team_get_option_name(0x557621b6e730, 0x557621b6e730, 0, 0x557621b55b78) = 0x557621b6e780
Aug 27 08:58:05.374929 team_is_option_changed(0x557621b6e730, 0x557621b6e730, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.375033 team_get_next_option(0x557621b4c710, 0x557621b6e730, 0, 0x557621b55b78) = 0x557621b6e6a0
Aug 27 08:58:05.375137 team_get_option_name(0x557621b6e6a0, 0x557621b6e6a0, 0, 0x557621b55b78) = 0x557621b6e6f0

strace of teamd:

recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569330, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569330, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569331, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569331, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569332, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569332, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569333, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569333, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
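
The traces above show teamd spinning through the same netlink option dump over and over. For reference, a minimal sketch of how similar traces could be captured from a running teamd; the reporter's exact invocation isn't given, so file paths and durations here are placeholders:

    # watch teamd's CPU usage (pidstat comes from the sysstat package)
    pidstat -p "$(pidof teamd)" 1 5

    # attach strace/ltrace for a few seconds, with microsecond timestamps
    timeout 5 strace -tt -p "$(pidof teamd)" -o /tmp/teamd.strace
    timeout 5 ltrace -tt -p "$(pidof teamd)" -o /tmp/teamd.ltrace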



Also, this may be unrelated, but prior to this issue (before updating the system) I was seeing the following error messages every 5 seconds from nm-team once the system had been up for a few days. I could never figure out what caused them, but the team still seemed to work; it just seemed to start happening randomly.
Aug 26 20:48:24 server1 kernel: nm-team: Failed to send options change via netlink (err -105)
Aug 26 20:48:29 server1 kernel: nm-team: Failed to send options change via netlink (err -105)
Aug 26 20:48:35 server1 kernel: nm-team: Failed to send options change via netlink (err -105)

--- Additional comment from Patrick Talbert on 2020-08-31 09:01:00 UTC ---

err -105 is ENOBUFS (No buffer space available).
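
As a quick way to double-check such kernel error codes (assuming the errno(1) utility from moreutils, or any python3, is available):

    # map the numeric code back to its errno name and message
    errno 105
    python3 -c 'import errno, os; print(errno.errorcode[105], os.strerror(105))'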


This issue seems trivial to reproduce in F32:

$ rpm -q teamd
teamd-1.30-2.fc32.x86_64

# nmcli con add type team con-name team0 ifname team0 team.runner loadbalance team.runner-tx-hash eth,ipv4,ipv6 team.runner-tx-balancer basic ipv4.method disable ipv6.method ignore
# nmcli con add type ethernet ifname enp8s0 con-name enp8s0 master team0
# top -n 1 | head

top - 10:58:22 up 54 min,  2 users,  load average: 0.98, 0.56, 0.23
Tasks: 132 total,   2 running, 130 sleeping,   0 stopped,   0 zombie
%Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1979.8 total,   1258.6 free,    165.8 used,    555.4 buff/cache
MiB Swap:    820.0 total,    820.0 free,      0.0 used.   1660.2 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                         
  18647 root      20   0    6696   3808   3448 R  93.8   0.2   3:51.81 teamd                           
      1 root      20   0  109464  17940   9648 S   0.0   0.9   0:01.61 systemd                         
      2 root      20   0       0      0      0 S   0.0   0.0   0:00.02 kthreadd   



I assume this is caused by a change in the teamd/libteam package. I'll try to bisect it.
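
Not part of the original comment, but a sketch of the kind of before/after check that bisecting implies, assuming older teamd/libteam builds are still reachable by dnf:

    # note the current versions, then roll back teamd/libteam
    rpm -q teamd libteam
    dnf downgrade teamd libteam

    # restart the team connection and re-check teamd's CPU usage
    nmcli connection down team0 && nmcli connection up team0
    top -b -n 1 | grep teamd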

Comment 1 Vladimir Benes 2020-08-31 10:04:31 UTC
We see exactly the same behavior in RHEL 8.3 with this reproducer:

# nmcli con add type team con-name team0 ifname team0 team.runner loadbalance team.runner-tx-hash eth,ipv4,ipv6 team.runner-tx-balancer basic ipv4.method disable ipv6.method ignore
# nmcli con add type ethernet ifname enp8s0 con-name enp8s0 master team0
# top -n 1 | head


    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                     
  28211 root      20   0   59712   3152   2628 R  37,5   0,2   7:49.69 teamd                                                                                                                                       
  25631 root      20   0  841072  33456  31720 S  31,2   1,8   6:59.08 rsyslogd                                                                                                                                    
  20503 root      20   0  404460 210928 209428 S  25,0  11,3   6:13.08 systemd-journal    

with a flood of messages in the logs:

srp 31 05:43:33 beaker-networkmanager-custom-upstream-1846 teamd_team0[28211]: eth5: Enabling port
srp 31 05:43:33 beaker-networkmanager-custom-upstream-1846 teamd_team0[28211]: Port eth5 rebalanced, delta: 0
srp 31 05:43:33 beaker-networkmanager-custom-upstream-1846 teamd_team0[28211]: <changed_option_list>
srp 31 05:43:33 beaker-networkmanager-custom-upstream-1846 teamd_team0[28211]: *enabled (port:eth5) true
srp 31 05:43:33 beaker-networkmanager-custom-upstream-1846 teamd_team0[28211]: </changed_option_list>


keeping the system heavily loaded.
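
One way to quantify the log flood described above (assuming the syslog identifier teamd_team0 seen in the messages; adjust for the actual team device name):

    # count teamd_team0 messages logged in the last minute
    journalctl -t teamd_team0 --since "1 min ago" | wc -l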

Comment 2 Vladimir Benes 2020-08-31 10:12:23 UTC
libteam-1.31-1.el8.x86_64
NetworkManager-1.26.0-6.el8.x86_64
kernel-4.18.0-234.el8.x86_64

Comment 7 LiLiang 2020-09-01 06:41:19 UTC
reproduced:

[root@hp-dl380g10-04 ~]# uname -r
4.18.0-234.el8.x86_64
[root@hp-dl380g10-04 ~]# rpm -q libteam
libteam-1.31-1.el8.x86_64

     teamd -d -c '{"runner":{"name":"loadbalance"}}'
     ip link set team0 up
     teamdctl team0 port add ens2f0
     teamdctl team0 port add ens2f1
     top


     2532 root      20   0   59716   2440   1836 R  99.7   0.0   0:31.80 teamd
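
For completeness (not part of the original comment), a sketch of how this manual reproducer can be torn down afterwards:

     teamdctl team0 port remove ens2f0
     teamdctl team0 port remove ens2f1
     # kill the teamd instance that was started with -d
     teamd -t team0 -k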

Comment 8 LiLiang 2020-09-01 06:49:34 UTC
Verified:

[root@hp-dl380g10-04 ~]# rpm -q libteam
libteam-1.31-2.el8.x86_64

[root@hp-dl380g10-04 ~]# ps -C teamd -o pid,pcpu,pmem,cmd
    PID %CPU %MEM CMD
   3062  0.0  0.0 teamd -d -c {"runner":{"name":"loadbalance"}}

Comment 9 Vladimir Benes 2020-09-01 11:46:36 UTC
Yeah, libteam-1.31-2.el8.x86_64 works well! Could you do a proper build?

Comment 19 errata-xmlrpc 2020-11-04 01:53:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libteam bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4512

