Bug 1383997 - Incorrect configuration process of teaming over vlan adapters, containing dot in name
Summary: Incorrect configuration process of teaming over vlan adapters, containing dot in name
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libteam
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Marcelo Ricardo Leitner
QA Contact: Amit Supugade
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-12 10:36 UTC by riddimshocker
Modified: 2020-07-16 08:56 UTC
CC: 9 users

Fixed In Version: libteam-1.25-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 23:07:21 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2017:2201 (priority normal, status SHIPPED_LIVE): libteam bug fix update, last updated 2017-08-01 19:41:39 UTC

Description riddimshocker 2016-10-12 10:36:40 UTC
Description of problem:

I need to configure an active-backup team like this:

    enp1s0f0 --- enp1s0f0.11 -- iSCSI IP 1
             \-- enp1s0f0.36 ---------------\
                                             >-- team36 -- IP
    enp1s0f1 --- enp1s0f1.12 -- iSCSI IP 2  /
             \-- enp1s0f1.36 ---------------/

ifcfg-enp1s0f0.36 conf file:
VLAN=yes
DEVICE=enp1s0f0.36
PHYSDEV=enp1s0f0
ONBOOT=yes
DEVICETYPE="TeamPort"
TEAM_MASTER="team36"

ifcfg-enp1s0f1.36 conf file:
VLAN=yes
DEVICE=enp1s0f1.36
PHYSDEV=enp1s0f1
ONBOOT=yes
DEVICETYPE="TeamPort"
TEAM_MASTER="team36"

ifcfg-team36 conf file:
DEVICE=team36
DEVICETYPE="Team"
ONBOOT=yes
BOOTPROTO=none
IPADDR="172.31.36.15"
PREFIX=24
TEAM_CONFIG='{ "runner" : {  "name" : "activebackup" }, "link_watch" : {  "name" : "ethtool" } }'

After team36 is initialized I try to check its status, which is "Failed":
:~$ teamdctl team36 state view -v
setup:
  runner: activebackup
  kernel team mode: activebackup
  D-BUS enabled: yes
  ZeroMQ enabled: no
  debug level: 0
  daemonized: no
  PID: 5456
  PID file: /var/run/teamd/team36.pid
ports:
Failed to parse JSON port dump.
command call failed (Invalid argument)

:~$ teamdctl team36 state dump
{
    "ports": {
        "enp1s0f0": {
            "36": {
                "ifinfo": {
                    "dev_addr": "a0:36:9f:78:d7:54",
                    "dev_addr_len": 6,
                    "ifindex": 24,
                    "ifname": "enp1s0f0.36"
                },
                "link": {
                    "duplex": "half",
                    "speed": 0,
                    "up": false
                },
                "link_watches": {
                    "list": {
                        "link_watch_0": {
                            "delay_down": 0,
                            "delay_up": 0,
                            "down_count": 1,
                            "name": "ethtool",
                            "up": false
                        }
                    },
                    "up": false
                }
            }
        },
        "enp1s0f1": {
            "36": {
                "ifinfo": {
                    "dev_addr": "a0:36:9f:78:d7:54",
                    "dev_addr_len": 6,
                    "ifindex": 25,
                    "ifname": "enp1s0f1.36"
                },
                "link": {
                    "duplex": "half",
                    "speed": 0,
                    "up": false
                },
                "link_watches": {
                    "list": {
                        "link_watch_0": {
                            "delay_down": 0,
                            "delay_up": 0,
                            "down_count": 1,
                            "name": "ethtool",
                            "up": false
                        }
                    },
                    "up": false
                }
            }
        }
    },
    "runner": {
        "active_port": ""
    },
    "setup": {
        "daemonized": false,
        "dbus_enabled": true,
        "debug_level": 0,
        "kernel_team_mode_name": "activebackup",
        "pid": 2382,
        "pid_file": "/var/run/teamd/team36.pid",
        "runner_name": "activebackup",
        "zmq_enabled": false
    },
    "team_device": {
        "ifinfo": {
            "dev_addr": "a0:36:9f:78:d7:54",
            "dev_addr_len": 6,
            "ifindex": 23,
            "ifname": "team36"
        }
    }
}


The "port" section, as you can see, is splitted in two:
"ports": {
        "enp1s0f0": {
            "36": {

If I use "vlan36" kind of names for vlan interfaces, teaming works well and 'teamdctl team36 state view -v' doesn't produce any error. "ports" section's structure is correct, doesn't contain no fragmentation:
:~$ teamdctl team36 state dump
..........
"ports": {
        "vlan36": {
                "ifinfo": {
                    "dev_addr": "a0:36:9f:78:d7:54",
                    "dev_addr_len": 6,
                    "ifindex": 24,
                    "ifname": "enp1s0f0.36"
                },
..........


But I can't use a name like "vlan36"; the names must be "enp1s0f0.36" and "enp1s0f1.36", because I use the same VLAN number (VID) on different adapters.

All of this indicates that teaming processes interface names containing dots incorrectly.

Version-Release number of selected component (if applicable):

:~$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.2.1511 (Core)
Release:        7.2.1511
Codename:       Core
:~$ uname -a
Linux clu-node5 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
:~$ teamd --version
teamd 1.17

How reproducible:
Always

Comment 2 Xin Long 2016-10-12 14:40:12 UTC
In get_port_obj():
json_unpack() treats a dot as a separator between nesting levels,
so this looks like a jansson API issue; I will check later whether jansson has a better API for this.

Comment 3 Hangbin Liu 2016-11-29 06:54:44 UTC
Update: jansson actually works fine. The real cause is that in __teamd_json_path_lite_va() the path looks like ".ports.eth1.2.queue_id" and we split it on ".", which makes "eth1.2" become two levels. I will check how to fix it.
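
To make the failure mode concrete, here is a minimal standalone sketch (an illustration under the assumption of a naive strtok_r()-style split; this is not teamd's actual parser code):

/* Minimal sketch: split a teamd-style JSON path on '.' the naive way.
 * Illustrates the bug's mechanism only; not code from teamd itself.
 * Build: cc -std=c11 -o split split.c && ./split */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The port level is the ifname "enp1s0f0.36", but a plain split
     * on '.' cannot tell its dot apart from a level separator. */
    char path[] = ".ports.enp1s0f0.36.ifinfo";
    char *saveptr;

    for (char *tok = strtok_r(path, ".", &saveptr); tok;
         tok = strtok_r(NULL, ".", &saveptr))
        printf("level: \"%s\"\n", tok);
    return 0;
}

This prints four levels ("ports", "enp1s0f0", "36", "ifinfo") where three were intended, matching the fragmented "ports" section in the state dump above.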

Comment 5 Hangbin Liu 2017-03-21 01:45:58 UTC
Re-assigning back to Marcelo since we have the fix now.
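
The patch itself is not attached to this bug, so as a hedged illustration of the direction comment 3 points at, here is a hypothetical jansson-based sketch (assumptions: jansson is installed and the toy object mirrors the state dump; this is not the actual libteam-1.25-5.el7 change). Looking a port object up by its full ifname as one opaque key means a dot in the name can never be mistaken for a level separator:

/* Hypothetical sketch, not the actual libteam patch: treat the port
 * ifname as a single opaque object key instead of embedding it in a
 * dotted path that later gets re-split.
 * Build: cc -o lookup lookup.c -ljansson && ./lookup */
#include <jansson.h>
#include <stdio.h>

int main(void)
{
    /* Toy state dump: { "ports": { "enp1s0f0.36": { "ifindex": 24 } } } */
    json_t *root = json_pack("{s:{s:{s:i}}}",
                             "ports", "enp1s0f0.36", "ifindex", 24);

    /* json_object_get() treats the key as an opaque string, so the
     * dot inside "enp1s0f0.36" is harmless here. */
    json_t *ports = json_object_get(root, "ports");
    json_t *port = json_object_get(ports, "enp1s0f0.36");

    if (port)
        printf("found port, ifindex=%lld\n",
               (long long)json_integer_value(
                   json_object_get(port, "ifindex")));

    json_decref(root);
    return 0;
}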

Comment 7 Amit Supugade 2017-04-04 19:16:13 UTC
Verified on libteam-1.25-5.el7.x86_64

LOG:
[root@sam ~]# uname -r 
3.10.0-637.el7.x86_64
[root@sam ~]# rpm -q teamd libteam
teamd-1.25-5.el7.x86_64
libteam-1.25-5.el7.x86_64
[root@sam ~]# 
[root@sam ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp7s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq portid 0100000000000000000000333135384643 state UP qlen 1000
    link/ether 00:90:fa:8a:5b:fa brd ff:ff:ff:ff:ff:ff
3: enp7s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq portid 0200000000000000000000333135384643 state UP qlen 1000
    link/ether 00:90:fa:8a:5c:02 brd ff:ff:ff:ff:ff:ff
4: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether e4:11:5b:dd:e6:6c brd ff:ff:ff:ff:ff:ff
    inet 10.19.15.26/24 brd 10.19.15.255 scope global dynamic enp5s0f0
       valid_lft 85180sec preferred_lft 85180sec
    inet6 2620:52:0:130b:e611:5bff:fedd:e66c/64 scope global noprefixroute dynamic 
       valid_lft 2591778sec preferred_lft 604578sec
    inet6 fe80::e611:5bff:fedd:e66c/64 scope link 
       valid_lft forever preferred_lft forever
5: enp5s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether e4:11:5b:dd:e6:6d brd ff:ff:ff:ff:ff:ff
[root@sam ~]# ip link add link enp7s0f0 name enp7s0f0.3 type vlan id 3
[root@sam ~]# ip link add link enp7s0f1 name enp7s0f1.3 type vlan id 3
[root@sam ~]# 
[root@sam ~]# #team_json='{ "runner" : {  "name" : "activebackup" }, "link_watch" : {  "name" : "ethtool" } }'
[root@sam ~]# teamd -d -t team3 -c '{ "runner" : {  "name" : "activebackup" }, "link_watch" : {  "name" : "ethtool" } }'
This program is not intended to be run as root.
[root@sam ~]# teamdctl team3 port add enp7s0f0.3
[root@sam ~]# teamdctl team3 port add enp7s0f1.3
[root@sam ~]# 
[root@sam ~]# ip link set team3 up
[root@sam ~]# teamdctl team3 state view -v
setup:
  runner: activebackup
  kernel team mode: activebackup
  D-BUS enabled: no
  ZeroMQ enabled: no
  debug level: 0
  daemonized: yes
  PID: 2482
  PID file: /var/run/teamd/team3.pid
ports:
  enp7s0f0.3
    ifindex: 6
    addr: 00:90:fa:8a:5b:fa
    ethtool link: 0mbit/halfduplex/up
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
        link up delay: 0
        link down delay: 0
  enp7s0f1.3
    ifindex: 7
    addr: 00:90:fa:8a:5b:fa
    ethtool link: 0mbit/halfduplex/up
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
        link up delay: 0
        link down delay: 0
runner:
  active port: enp7s0f0.3
[root@sam ~]# teamdctl team3 state dump
{
    "ports": {
        "enp7s0f0.3": {
            "ifinfo": {
                "dev_addr": "00:90:fa:8a:5b:fa",
                "dev_addr_len": 6,
                "ifindex": 6,
                "ifname": "enp7s0f0.3"
            },
            "link": {
                "duplex": "half",
                "speed": 0,
                "up": true
            },
            "link_watches": {
                "list": {
                    "link_watch_0": {
                        "delay_down": 0,
                        "delay_up": 0,
                        "down_count": 0,
                        "name": "ethtool",
                        "up": true
                    }
                },
                "up": true
            }
        },
        "enp7s0f1.3": {
            "ifinfo": {
                "dev_addr": "00:90:fa:8a:5b:fa",
                "dev_addr_len": 6,
                "ifindex": 7,
                "ifname": "enp7s0f1.3"
            },
            "link": {
                "duplex": "half",
                "speed": 0,
                "up": true
            },
            "link_watches": {
                "list": {
                    "link_watch_0": {
                        "delay_down": 0,
                        "delay_up": 0,
                        "down_count": 0,
                        "name": "ethtool",
                        "up": true
                    }
                },
                "up": true
            }
        }
    },
    "runner": {
        "active_port": "enp7s0f0.3"
    },
    "setup": {
        "daemonized": true,
        "dbus_enabled": false,
        "debug_level": 0,
        "kernel_team_mode_name": "activebackup",
        "pid": 2482,
        "pid_file": "/var/run/teamd/team3.pid",
        "runner_name": "activebackup",
        "zmq_enabled": false
    },
    "team_device": {
        "ifinfo": {
            "dev_addr": "00:90:fa:8a:5b:fa",
            "dev_addr_len": 6,
            "ifindex": 8,
            "ifname": "team3"
        }
    }
}
[root@sam ~]#

Comment 8 errata-xmlrpc 2017-08-01 23:07:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2201

