Bug 1533847

Summary: team up with one port even if min-ports set to 2
Product: Red Hat Enterprise Linux 7
Reporter: Vladimir Benes <vbenes>
Component: libteam
Assignee: Xin Long <lxin>
Status: CLOSED ERRATA
QA Contact: LiLiang <liali>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.5
CC: atragler, fgiudici, mleitner, tredaelli, vbenes
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: libteam-1.27-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-10-30 11:43:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vladimir Benes 2018-01-12 11:19:49 UTC
Description of problem:
I created a team with two slaves and "min_ports": 2.

I have just one slave active, but the team master is up too. Shouldn't it be down, or NO-CARRIER?


[root@gsm-r5s9-01 ~]# teamdctl nm-team conf dump
{
    "device": "nm-team",
    "ports": {
        "eth1": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "eth2": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "min_ports": 2,
        "name": "lacp",
        "tx_hash": [
            "eth",
            "ipv4",
            "ipv6"
        ]
    }
}
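For reference, the check the lacp runner is expected to make can be read straight from this dump: carrier should only be raised while at least min_ports ports are active. A minimal sketch (plain Python; the conf JSON is copied from the dump above) of pulling out the relevant fields:

```python
import json

# Conf copied verbatim (reformatted) from `teamdctl nm-team conf dump` above.
conf = json.loads("""
{
    "device": "nm-team",
    "ports": {
        "eth1": {"link_watch": {"name": "ethtool"}},
        "eth2": {"link_watch": {"name": "ethtool"}}
    },
    "runner": {
        "min_ports": 2,
        "name": "lacp",
        "tx_hash": ["eth", "ipv4", "ipv6"]
    }
}
""")

min_ports = conf["runner"]["min_ports"]   # 2: carrier requires 2 active ports
ports = sorted(conf["ports"])             # the two configured slaves
```

With only one of the two configured ports active, this config says the team device should report NO-CARRIER, which is exactly what the report below shows is not happening.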

[root@gsm-r5s9-01 ~]# teamdctl nm-team state dump
{
    "runner": {
        "active": true,
        "fast_rate": false,
        "select_policy": "lacp_prio",
        "sys_prio": 65535
    },
    "setup": {
        "daemonized": false,
        "dbus_enabled": true,
        "debug_level": 2,
        "kernel_team_mode_name": "loadbalance",
        "pid": 14905,
        "pid_file": "/var/run/teamd/nm-team.pid",
        "runner_name": "lacp",
        "zmq_enabled": false
    },
    "team_device": {
        "ifinfo": {
            "dev_addr": "8a:0e:09:35:c0:4b",
            "dev_addr_len": 6,
            "ifindex": 525,
            "ifname": "nm-team"
        }
    }
}

NAME       UUID                                  TYPE      DEVICE  
team0      8ad1c07a-820a-4c93-b357-ff7d2ee9398e  team      nm-team 
team0.0    d405b293-6cc9-4ebd-815e-c60489d656b8  ethernet  eth1    
testeth0   afc0689e-e22f-4219-afe9-add03005e3c0  ethernet  eth0    
team0.1    d7edd67d-e61c-40f8-9388-3e241a714f8b  ethernet  --   


525: nm-team: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8a:0e:09:35:c0:4b brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 1.2.3.255 scope global noprefixroute nm-team
       valid_lft forever preferred_lft forever
    inet6 fe80::401f:f839:4b99:4596/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
5: eth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master nm-team state UP group default qlen 1000
    link/ether 8a:0e:09:35:c0:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: eth2@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:f2:a4:71:7b:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Version-Release number of selected component (if applicable):
kernel-3.10.0-826.el7.x86_64
libteam-1.27-1.el7.x86_64
NetworkManager-1.10.2-3.el7.x86_64

Comment 1 Xin Long 2018-02-25 14:58:47 UTC
(In reply to Vladimir Benes from comment #0)
> Description of problem:
> I created a team with two slaves and "min_ports": 2.
> 
> I have just one slave active, but the team master is up too. Shouldn't it
> be down, or NO-CARRIER?
Yes, it should be NO-CARRIER if there is only one active slave.

> 525: nm-team: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
> UP group default qlen 1000
>     link/ether 8a:0e:09:35:c0:4b brd ff:ff:ff:ff:ff:ff
>     inet 1.2.3.4/24 brd 1.2.3.255 scope global noprefixroute nm-team
>        valid_lft forever preferred_lft forever
>     inet6 fe80::401f:f839:4b99:4596/64 scope link noprefixroute 
>        valid_lft forever preferred_lft forever
> 5: eth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
> nm-team state UP group default qlen 1000
>     link/ether 8a:0e:09:35:c0:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
> 7: eth2@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
> UP group default qlen 1000
>     link/ether f6:f2:a4:71:7b:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> 
But both eth1 and eth2 are in state "UP"; may I ask how you made the other port "inactive"?
If it was "teamnl -p veth1 team0 setoption enabled false", that would not work, as teamd doesn't monitor that option.
If it was something that changed the device's state, it should work fine, as it does in my env.

Thanks.

Comment 2 Vladimir Benes 2018-03-05 11:22:42 UTC
nmcli connection add type team con-name team0 ifname nm-team autoconnect no ip4 1.2.3.4/24 team.runner lacp team.runner-min-ports 2
nmcli connection add type team-slave ifname eth1 con-name team0.0 autoconnect no master nm-team
nmcli connection add type team-slave ifname eth2 con-name team0.1 autoconnect no master nm-team


nmcli con up team0.0

As you can see, eth2 is up but is not enslaved to nm-team:

5: eth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master nm-team state UP group default qlen 1000
    link/ether 4e:3a:c1:96:f1:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: eth2@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:88:65:84:ef:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
27: nm-team: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:3a:c1:96:f1:39 brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 1.2.3.255 scope global noprefixroute nm-team
       valid_lft forever preferred_lft forever
    inet6 fe80::39e8:47ec:6771:6992/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever


Is this something that should be handled in NM?

Comment 3 Xin Long 2018-03-29 10:02:41 UTC
Just identified the issue.
It actually worked with my teamd, which is built with 'make' in my env, and:
# ./configure |grep rtnl_link_set_carrier
checking for rtnl_link_set_carrier in -lnl-route-3... yes <---

I checked the build log in brew env on:
http://download-node-02.eng.bos.redhat.com/brewroot/packages/libteam/1.27/4.el7/data/logs/x86_64/build.log

checking for rtnl_link_set_carrier in -lnl-route-3...  no <---


Without rtnl_link_set_carrier support in libnl3, team_carrier_set() in libteam can't actually update the device's flags:
TEAM_EXPORT
int team_carrier_set(struct team_handle *th, bool carrier_up)
{
#ifdef HAVE_RTNL_LINK_SET_CARRIER <--- no
        [...]
        rtnl_link_set_ifindex(link, th->ifindex);
        rtnl_link_set_carrier(link, carrier_up ? 1 : 0);
        [...]
        return err;
#else
        return -EOPNOTSUPP;
#endif
}

That's why we couldn't see the <NO-CARRIER> flag on the team device.

I'm not sure whether we can upgrade the libnl3 package in the brew env and rebuild libteam if we want to see this flag. Otherwise, the 'min_ports' parameter can't really work.
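As a quick sanity check of a given system, one can probe whether the installed libnl-route-3 actually exports rtnl_link_set_carrier. A sketch using Python's ctypes (the lookup name "nl-route-3" is an assumption; on a system without the library the probe simply reports False):

```python
import ctypes
import ctypes.util

def has_rtnl_link_set_carrier() -> bool:
    """Report whether the installed libnl-route-3 exports rtnl_link_set_carrier."""
    path = ctypes.util.find_library("nl-route-3")
    if path is None:
        return False  # libnl-route-3 not installed at all
    try:
        lib = ctypes.CDLL(path)
    except OSError:
        return False
    # Attribute lookup on the loaded library succeeds only if the
    # symbol is exported, mirroring the ./configure check above.
    return hasattr(lib, "rtnl_link_set_carrier")
```

This is the runtime analogue of the build-time check; a libteam built against a libnl3 where this returns False will hit the -EOPNOTSUPP fallback shown above.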

Comment 4 Marcelo Ricardo Leitner 2018-04-09 13:18:34 UTC
Clearing the need-info for now. We talked, and Xin will try a build with a BuildRequires forcing the libnl3 version; we will talk to rel-eng if it still fails.

Comment 7 LiLiang 2018-09-05 09:19:03 UTC
Verified on:
[root@hp-dl380pg8-15 sriov]# rpm -qa|grep libteam
libteam-1.27-5.el7.x86_64

[root@hp-dl380pg8-15 sriov]# teamd -d -t team0 -c '{"runner":{"name":"lacp","min_ports":2}}'
[root@hp-dl380pg8-15 sriov]# ip link set ens2f0 master team0
[root@hp-dl380pg8-15 sriov]# ip link set ens2f1 master team0
[root@hp-dl380pg8-15 sriov]# ip link show team0
26: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:90:fa:2a:65:82 brd ff:ff:ff:ff:ff:ff

[root@hp-dl380pg8-15 sriov]# ip link set ens2f1 nomaster
[root@hp-dl380pg8-15 sriov]# ip link show team0
26: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:90:fa:2a:65:82 brd ff:ff:ff:ff:ff:ff
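The before/after difference can be checked mechanically by parsing the flag list out of the `ip link show` output; a small sketch (plain Python, sample lines copied from the transcript above):

```python
import re

def link_flags(ip_link_line: str) -> list:
    """Extract the <...> flag list from a line of `ip link show` output."""
    m = re.search(r"<([^>]*)>", ip_link_line)
    return m.group(1).split(",") if m else []

# Lines copied from the verification transcript above.
before = "26: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000"
after = "26: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000"

print("NO-CARRIER" in link_flags(before))  # with both ports enslaved
print("NO-CARRIER" in link_flags(after))   # after dropping below min_ports
```

With the fixed libteam, removing one of the two ports drops the team below min_ports and the NO-CARRIER flag appears, as expected.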

Comment 9 errata-xmlrpc 2018-10-30 11:43:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3297