Bug 2065227

Summary: Customer is getting a core dump while executing the command "teamdctl <non team interface name> state dump"
Product: Red Hat Enterprise Linux 8
Reporter: Uday Patel <upatel>
Component: libteam
Assignee: Xin Long <lxin>
Status: CLOSED ERRATA
QA Contact: LiLiang <liali>
Severity: low
Priority: unspecified
Version: 8.5
CC: jeriley, mpeterma, network-qe
Target Milestone: rc
Keywords: Triaged
Target Release: 8.7
Flags: pm-rhel: mirror+
Hardware: x86_64
OS: Linux
Fixed In Version: libteam-1.31-3.el8
Doc Type: If docs needed, set a value
Last Closed: 2023-05-16 09:03:11 UTC
Type: Bug
Bug Blocks: 2130221

Description Uday Patel 2022-03-17 14:06:23 UTC
Description of problem:

Customer is getting a core dump while executing the command "teamdctl <non team interface name> state dump"

Version-Release number of selected component (if applicable):


How reproducible:

Execute the command below against a non-team interface.

The customer gets a core dump while executing "teamdctl <non team interface name> state dump".

For example:

 teamdctl eno1 state dump


Steps to Reproduce:

1. teamdctl <non team interface name> state dump


Actual results:

libteamdctl: teamdctl_connect: Failed to connect using all CLIs.
teamdctl_connect failed (Invalid argument)

Expected results:

The command should fail with a clear error message instead of dumping core.

Additional info:

Comment 1 Xin Long 2022-04-13 20:56:41 UTC
(In reply to Uday Patel from comment #0)
> Description of problem:
> 
> Customer is getting core dump while executing command  "teamdctl <non team
> interface name> state dump"
Hi Uday,
Sorry for the late reply.

I didn't get any core dump when running this command, and I couldn't see how it could happen from the code side.
Can you please double-check whether the core dump was really caused by this command in the customer's environment? Or can they deliver the core file to us?

> 
> libteamdctl: teamdctl_connect: Failed to connect using all CLIs.
> teamdctl_connect failed (Invalid argument)
> 

Currently the netlink API is used only by 'teamd'; 'teamdctl' checks the device with an ioctl syscall, which can only tell whether the device exists, not what type of device it is. That's why, as long as <interface name> exists, execution can reach teamdctl_connect().

Then teamdctl_connect() connects to the 'teamd' daemon by "interface name", because the teamd daemon uses the interface name to create its Unix socket server. This part doesn't really check whether a team device exists, although a teamd daemon normally binds to a team device. So "Failed to connect using all CLIs" tells you that the Unix socket server doesn't exist, and that the team device may not exist either.
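
As an illustration of the point above, here is a minimal, hypothetical sketch (not libteam's actual code) of such an ioctl-based existence check. SIOCGIFINDEX succeeds for any existing interface, team or not, so a check like this can only confirm that the name resolves to some device:

/* Hypothetical example, not part of libteam: an ioctl-based check can
 * only tell whether an interface exists, not whether it is a team device. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

static int ifname_exists(const char *ifname)
{
        struct ifreq ifr;
        int fd, err;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return 0;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        err = ioctl(fd, SIOCGIFINDEX, &ifr); /* succeeds for ANY existing interface */
        close(fd);
        return err == 0;
}

int main(int argc, char **argv)
{
        if (argc > 1)
                printf("%s %s\n", argv[1],
                       ifname_exists(argv[1]) ? "exists" : "does not exist");
        return 0;
}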

Comment 2 Xin Long 2022-04-13 22:38:32 UTC
(In reply to Xin Long from comment #1)
> (In reply to Uday Patel from comment #0)
> > Description of problem:
> > 
> > Customer is getting core dump while executing command  "teamdctl <non team
> > interface name> state dump"
> Hi, Uday,
> Sorry for late.
> 
> I didn't get any core dump when running this cmd, and I couldn't see how
> this happens from code side.
> can you please double check if that core dump was really caused by this cmd
> in Customer's environment? or can they deliver the core file to us?
> 
After checking the customer's case, I can reproduce the core dump by using a device name that includes ".", such as a VLAN device.
The core dump is actually a D-Bus error: before calling cli_method_call() -> dbus_message_new_method_call(), the D-Bus name should have been validated first:

diff --git a/libteamdctl/cli_dbus.c b/libteamdctl/cli_dbus.c
index dfef5c4..a2fe4ac 100644
--- a/libteamdctl/cli_dbus.c
+++ b/libteamdctl/cli_dbus.c
@@ -184,17 +184,18 @@ static int cli_dbus_init(struct teamdctl *tdc, const char *team_name, void *priv
                return -errno;

        dbus_error_init(&error);
+       if (!dbus_validate_bus_name(cli_dbus->service_name, &error))
+               goto free_service_name;

I will post the fix on upstream first.

Thanks.
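
For illustration, a small standalone sketch (assuming the per-device bus name is built as "org.libteam.teamd.<ifname>"; that naming is an assumption, not taken from the patch above) shows why device names containing "." trigger the assertion: the last element of the resulting bus name starts with a digit, so dbus_validate_bus_name() rejects it, and dbus_message_new_method_call() aborts on such a destination unless it is validated first.

/* Hypothetical example; the "org.libteam.teamd.<ifname>" naming is an
 * assumption, used only to show why "." in a device name breaks bus-name
 * validation.
 * Build: gcc check_name.c $(pkg-config --cflags --libs dbus-1) */
#include <stdio.h>
#include <dbus/dbus.h>

static void check(const char *name)
{
        DBusError error;

        dbus_error_init(&error);
        if (dbus_validate_bus_name(name, &error)) {
                printf("valid:   %s\n", name);
        } else {
                printf("invalid: %s (%s)\n", name, error.message);
                dbus_error_free(&error);
        }
}

int main(void)
{
        check("org.libteam.teamd.team0");   /* plain team device name: accepted */
        check("org.libteam.teamd.team0.3"); /* VLAN-style name: element "3" starts with a digit, rejected */
        return 0;
}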

Comment 8 LiLiang 2022-10-01 03:45:39 UTC
Reproduced:

[root@dell-per740-83 mode4]# rpm -q teamd
teamd-1.31-2.el8.x86_64
[root@dell-per740-83 mode4]# rpm -q libteam
libteam-1.31-2.el8.x86_64

[root@dell-per740-83 mode4]# cat re
slave1=ens1
slave2=ens5

teamd -d
ip link set team0 up
ip link set $slave1 down
ip link set $slave2 down
teamdctl team0 port add $slave1
teamdctl team0 port add $slave2

ip link add link team0 name team0.3 type vlan id 3
ip link set team0.3 up

teamdctl team0.3 state dump
[root@dell-per740-83 mode4]# source re
This program is not intended to be run as root.
dbus[27095]: arguments to dbus_message_new_method_call() were incorrect, assertion "destination == NULL || _dbus_check_is_valid_bus_name (destination)" failed in file ../../dbus/dbus-message.c line 1365.
This is normally a bug in some application using the D-Bus library.

  D-Bus not built with -rdynamic so unable to print a backtrace
Aborted (core dumped)

Comment 9 LiLiang 2022-10-01 03:48:09 UTC
Tested:

[root@dell-per740-83 mode4]# rpm -q libteam
libteam-1.31-3.el8.x86_64
[root@dell-per740-83 mode4]# rpm -q teamd
teamd-1.31-3.el8.x86_64
[root@dell-per740-83 mode4]# source re
This program is not intended to be run as root.
RTNETLINK answers: File exists
libteamdctl: teamdctl_connect: Failed to connect using all CLIs.
teamdctl_connect failed (Invalid argument)
[root@dell-per740-83 mode4]# cat re
slave1=ens1
slave2=ens5

teamd -d
ip link set team0 up
ip link set $slave1 down
ip link set $slave2 down
teamdctl team0 port add $slave1
teamdctl team0 port add $slave2

ip link add link team0 name team0.3 type vlan id 3
ip link set team0.3 up

teamdctl team0.3 state dump

Comment 18 errata-xmlrpc 2023-05-16 09:03:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libteam bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2956