Description of problem:
teamd is using 100% CPU (of one core). teamd was started by NetworkManager for a loadbalance runner to team 2 interfaces together. This seems to have started after a recent upgrade; it didn't use to use 100% CPU, and I haven't changed any configuration except for running updates. The team still seems to be working fine, though, since I can see 2 Gbit/s of throughput going out over the nm-team interface.

Version-Release number of selected component (if applicable):
teamd-1.30-2.fc32.x86_64
libteam-1.30-2.fc32.x86_64
NetworkManager-team-1.22.14-1.fc32.x86_64

How reproducible:
Create and start a team interface using these configs:

==> /etc/sysconfig/network-scripts/ifcfg-Ethernet_connection_1 <==
NAME="Ethernet connection 1"
UUID=81274114-e6ae-4a0d-8250-9014d6ce1e9f
DEVICE=eno1
ONBOOT=yes
TEAM_MASTER=nm-team
DEVICETYPE=TeamPort
TEAM_MASTER_UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2

==> /etc/sysconfig/network-scripts/ifcfg-Ethernet_connection_2 <==
NAME="Ethernet connection 2"
UUID=448cec16-d828-456c-8f39-8874f1f9a9c0
DEVICE=eno2
ONBOOT=yes
TEAM_MASTER=nm-team
DEVICETYPE=TeamPort
TEAM_MASTER_UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2

==> /etc/sysconfig/network-scripts/ifcfg-Team_connection_1 <==
TEAM_CONFIG="{\"debug_level\": 0, \"runner\": {\"name\": \"loadbalance\", \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"], \"tx_balancer\": {\"name\": \"basic\"}}, \"ports\": {\"eno1\": {}, \"eno2\": {}}}"
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.1.10.123
PREFIX=24
GATEWAY=10.1.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Team connection 1"
UUID=acb7eb99-81c6-4f23-9dc0-943fac44bab2
DEVICE=nm-team
ONBOOT=yes
DEVICETYPE=Team

Actual results:
Look in htop: teamd is always stuck at 100%.

Expected results:
teamd at near 0% CPU usage, as it was before.

Additional info:
ltrace of teamd:

Aug 27 08:58:05.372929
team_get_option_name(0x557621b6ea90, 0x557621b6ea90, 0, 0x557621b55b78) = 0x557621b6eae0
Aug 27 08:58:05.373034 team_is_option_changed(0x557621b6ea90, 0x557621b6ea90, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373138 team_get_next_option(0x557621b4c710, 0x557621b6ea90, 0, 0x557621b55b78) = 0x557621b6ea00
Aug 27 08:58:05.373243 team_get_option_name(0x557621b6ea00, 0x557621b6ea00, 0, 0x557621b55b78) = 0x557621b6ea50
Aug 27 08:58:05.373348 team_is_option_changed(0x557621b6ea00, 0x557621b6ea00, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373452 team_get_next_option(0x557621b4c710, 0x557621b6ea00, 0, 0x557621b55b78) = 0x557621b6e970
Aug 27 08:58:05.373561 team_get_option_name(0x557621b6e970, 0x557621b6e970, 0, 0x557621b55b78) = 0x557621b6e9c0
Aug 27 08:58:05.373666 team_is_option_changed(0x557621b6e970, 0x557621b6e970, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.373771 team_get_next_option(0x557621b4c710, 0x557621b6e970, 0, 0x557621b55b78) = 0x557621b6e8e0
Aug 27 08:58:05.373875 team_get_option_name(0x557621b6e8e0, 0x557621b6e8e0, 0, 0x557621b55b78) = 0x557621b6e930
Aug 27 08:58:05.373981 team_is_option_changed(0x557621b6e8e0, 0x557621b6e8e0, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374085 team_get_next_option(0x557621b4c710, 0x557621b6e8e0, 0, 0x557621b55b78) = 0x557621b6e850
Aug 27 08:58:05.374189 team_get_option_name(0x557621b6e850, 0x557621b6e850, 0, 0x557621b55b78) = 0x557621b6e8a0
Aug 27 08:58:05.374292 team_is_option_changed(0x557621b6e850, 0x557621b6e850, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374397 team_get_next_option(0x557621b4c710, 0x557621b6e850, 0, 0x557621b55b78) = 0x557621b6e7c0
Aug 27 08:58:05.374502 team_get_option_name(0x557621b6e7c0, 0x557621b6e7c0, 0, 0x557621b55b78) = 0x557621b6e810
Aug 27 08:58:05.374617 team_is_option_changed(0x557621b6e7c0, 0x557621b6e7c0, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.374719 team_get_next_option(0x557621b4c710, 0x557621b6e7c0, 0, 0x557621b55b78) = 0x557621b6e730
Aug 27 08:58:05.374824 team_get_option_name(0x557621b6e730, 0x557621b6e730, 0, 0x557621b55b78) = 0x557621b6e780
Aug 27 08:58:05.374929 team_is_option_changed(0x557621b6e730, 0x557621b6e730, 0, 0x557621b55b78) = 0
Aug 27 08:58:05.375033 team_get_next_option(0x557621b4c710, 0x557621b6e730, 0, 0x557621b55b78) = 0x557621b6e6a0
Aug 27 08:58:05.375137 team_get_option_name(0x557621b6e6a0, 0x557621b6e6a0, 0, 0x557621b55b78) = 0x557621b6e6f0

strace of teamd:

recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569330, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569330, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569330, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569331, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569331, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569331, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569332, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569332, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569332, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])
epoll_wait(9, [{EPOLLIN, {u32=7, u64=140728898420743}}], 2, -1) = 1
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 88
recvmsg(7, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=0x000020}, msg_namelen=12, msg_iov=[{iov_base=[{{len=72, type=team, flags=NLM_F_MULTI, seq=0, pid=0}, "\x02\x01\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x2c\x00\x02\x00\x28\x00\x01\x00\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, {len=16, type=NLMSG_DONE, flags=NLM_F_MULTI, seq=0, pid=0}], iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 88
sendmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}, "\x01\x00\x00\x00\x08\x00\x01\x00\x05\x00\x00\x00\x28\x00\x02\x80\x24\x00\x01\x80\x0c\x00\x01\x00\x65\x6e\x61\x62\x6c\x65\x64\x00"...}, iov_len=68}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 68
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569333, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_PEEK|MSG_TRUNC) = 36
recvmsg(6, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base={{len=36, type=NLMSG_ERROR, flags=NLM_F_CAPPED, seq=1784569333, pid=-171963616}, {error=0, msg={len=68, type=team, flags=NLM_F_REQUEST|NLM_F_ACK, seq=1784569333, pid=-171963616}}}, iov_len=16384}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
select(20, [3 9 10 15 16 17 19], [], [16], NULL) = 1 (in [9])

Also, this may be unrelated, but prior to this issue (before updating the system) I was seeing the following error messages every 5 seconds from nm-team once the system had been up for a few days. I could never figure out what caused it, but the team still seemed to work; it seemed to just start doing it randomly.

Aug 26 20:48:24 server1 kernel: nm-team: Failed to send options change via netlink (err -105)
Aug 26 20:48:29 server1 kernel: nm-team: Failed to send options change via netlink (err -105)
Aug 26 20:48:35 server1 kernel: nm-team: Failed to send options change via netlink (err -105)
err -105 is ENOBUFS, "No buffer space available".

This issue seems trivial to reproduce in F32:

$ rpm -q teamd
teamd-1.30-2.fc32.x86_64

# nmcli con add type team con-name team0 ifname team0 team.runner loadbalance team.runner-tx-hash eth,ipv4,ipv6 team.runner-tx-balancer basic ipv4.method disable ipv6.method ignore
# nmcli con add type ethernet ifname enp8s0 con-name enp8s0 master team0
# top -n 1 | head
top - 10:58:22 up 54 min,  2 users,  load average: 0.98, 0.56, 0.23
Tasks: 132 total,   2 running, 130 sleeping,   0 stopped,   0 zombie
%Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1979.8 total,   1258.6 free,    165.8 used,    555.4 buff/cache
MiB Swap:    820.0 total,    820.0 free,      0.0 used.   1660.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
18647 root      20   0    6696   3808   3448 R  93.8   0.2   3:51.81 teamd
    1 root      20   0  109464  17940   9648 S   0.0   0.9   0:01.61 systemd
    2 root      20   0       0      0      0 S   0.0   0.0   0:00.02 kthreadd

Assuming this is caused by a change in the teamd/libteam package. I'll try to bisect it.
Well, someone seems to have already found the cause of this issue:
https://github.com/jpirko/libteam/issues/52#issuecomment-661246849
It seems to be caused by the following commit:
https://github.com/jpirko/libteam/commit/deadb5b715227429a1879b187f5906b39151eca9

I'll stare at this a bit to see if I can spot the flaw.
In the current 1.30+ code (with the deadb commit applied), pretty much any time teamd_port_check_enable() is called with should_enable set, the function calls team_set_port_enabled(). Regardless of the current enabled state, this sends a netlink message to the kernel to set the 'enabled' state for the port. In turn, the kernel emits a change message, which is picked up by userspace, which triggers teamd_port_check_enable() again, etc etc etc.

In other words, since the deadb commit, at no point does either side (userspace or the kernel) check whether the 'enabled' state actually changed. So we just go around and around telling everybody about the cool new state.

The easy solution seems to be to revert the deadb commit, but I think we should be able to come up with something that doesn't Break Things and also solves the race described by the deadb commit.
This issue is still happening on 1.31 on Fedora 33.
This message is a reminder that Fedora 32 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 32 on 2021-05-25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '32'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 32 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 32 changed to end-of-life (EOL) status on 2021-05-25. Fedora 32 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug. Thank you for reporting this bug and we are sorry it could not be fixed.
This issue is still present on Fedora 34.
Fixed by:

commit 61efd6de2fbb8ee077863ee5a355ac3dfd9365b9
Author: Xin Long <lucien.xin>
Date:   Tue Sep 1 13:59:27 2020 +0800

    Revert "teamd: Disregard current state when considering port enablement"

Not sure whether libteam 1.32 will be released soon so that we can rebase and get it included in Fedora 34.
1. Download and install the source rpm.
2. Create revert_deadb5b715227429a1879b187f5906b39151eca9.patch in rpmbuild/SOURCES.
3. Create libteam.spec.patch in rpmbuild/SPECS, then apply it to the spec.
4. Enjoy.

revert_deadb5b715227429a1879b187f5906b39151eca9.patch
===================copy below===========================
diff --git a/teamd/teamd_per_port.c b/teamd/teamd_per_port.c
index 166da57..d429753 100644
--- a/teamd/teamd_per_port.c
+++ b/teamd/teamd_per_port.c
@@ -442,14 +442,18 @@ int teamd_port_check_enable(struct teamd_context *ctx,
 			    bool should_enable, bool should_disable)
 {
 	bool new_enabled_state;
+	bool curr_enabled_state;
 	int err;
 
 	if (!teamd_port_present(ctx, tdport))
 		return 0;
+	err = teamd_port_enabled(ctx, tdport, &curr_enabled_state);
+	if (err)
+		return err;
 
-	if (should_enable)
+	if (!curr_enabled_state && should_enable)
 		new_enabled_state = true;
-	else if (should_disable)
+	else if (curr_enabled_state && should_disable)
 		new_enabled_state = false;
 	else
 		return 0;
=====================copy above==========================

libteam.spec.patch
=====================copy below=======================
--- ../libteam.spec	2021-01-27 18:12:30.000000000 +0800
+++ libteam.spec	2021-09-06 13:27:28.240453762 +0800
@@ -1,11 +1,13 @@
 Name:           libteam
 Version:        1.31
-Release:        3%{?dist}
+Release:        4%{?dist}
 Summary:        Library for controlling team network device
 License:        LGPLv2+
 URL:            http://www.libteam.org
 Source:         http://www.libteam.org/files/libteam-%{version}.tar.gz
+Patch0:         revert_deadb5b715227429a1879b187f5906b39151eca9.patch
+
 BuildRequires:  gcc
 BuildRequires:  jansson-devel
 BuildRequires:  libdaemon-devel
@@ -63,6 +65,7 @@
 %prep
 %setup -q
+%patch0 -p1
 # prepare example dir for -devel
 mkdir -p _tmpdoc1/examples
======================copy above======================
This message is a reminder that Fedora Linux 34 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora Linux 34 on 2022-06-07. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a 'version' of '34'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, change the 'version' to a later Fedora Linux version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora Linux 34 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora Linux, you are encouraged to change the 'version' to a later version prior to this bug being closed.
Can you people please stop doing this? I'm sick of seeing "wahhhhh this bug is from a slightly old release so auto-close it even if it's valid". There is a new version of Fedora every 6 months, and it takes you years to fix bugs sometimes; how am I supposed to keep this open if I can't change the version? This is the exact reason I do not submit bugs for RHEL, CentOS, or Fedora anymore, but have no problem submitting dozens to projects like VyOS and FRR, who work with much more limited staff resources. People go through the work to create, follow, and comment on these issues, so it's a slap in the face to have your automation close so many issues haphazardly without anyone able to stop it. Stop doing this, I can't say it enough times....
Fedora Linux 34 entered end-of-life (EOL) status on 2022-06-07. Fedora Linux 34 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

Thank you for reporting this bug and we are sorry it could not be fixed.