Bug 1479384 - team loadbalance runner in active mode does not work properly
Summary: team loadbalance runner in active mode does not work properly
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libteam
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Xin Long
QA Contact: Network QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-08 13:17 UTC by Amit Supugade
Modified: 2020-10-19 14:07 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-06 05:15:52 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Amit Supugade 2017-08-08 13:17:27 UTC
Description of problem:
team loadbalance runner in active mode does not work properly

Version-Release number of selected component (if applicable):
libteam-1.25-5.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
slave1=p1p1
slave2=p1p2
target_ip=192.168.1.254

# Return (via the exit status) the percentage difference between two
# per-slave TX packet counts: 0 means perfectly balanced, 100 means one
# slave carried all of the traffic.
get_diff_percentage () {
        MAX=0
        MIN=0
        PERCENT=100
        if (( $1 < $2 ))
        then
                MAX=$2
                MIN=$1
        else
                MAX=$1
                MIN=$2
        fi

        # guard against a division by zero if no packets were counted at all
        if (( MAX == 0 ))
        then
                return 0
        fi

        let "PERCENT=($MAX-$MIN)*100/$MAX"
        return $PERCENT
}

# create the team device with the loadbalance runner and the basic tx balancer
teamd -d -t team0 -c '{ "runner" : { "name" : "loadbalance", "tx_balancer" : { "name" : "basic" } }, "link_watch" : { "name" : "ethtool" } }'
teamdctl team0 port add $slave1
teamdctl team0 port add $slave2
ip link set team0 up
pkill dhclient; sleep 3; dhclient -v team0

# snapshot the per-slave TX packet counters before generating traffic
tx_count_before_slave1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
tx_count_before_slave2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

ping -c 20 $target_ip

tx_count_after_slave1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
tx_count_after_slave2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

diff1=$(($tx_count_after_slave1 - $tx_count_before_slave1))
diff2=$(($tx_count_after_slave2 - $tx_count_before_slave2))

# print the imbalance percentage (0 = perfectly balanced, 100 = one slave only)
get_diff_percentage $diff1 $diff2
diff_percentage=$?
echo $diff_percentage

Actual results:
Packets are not equally distributed amongst the slaves ($diff_percentage in the above script is far from zero).

Expected results:
Packets should be equally distributed amongst the slaves ($diff_percentage in the above script should be close to zero).

Additional info:

Comment 2 Marcelo Ricardo Leitner 2017-08-08 19:54:18 UTC
Okay, but what are your results?

Comment 3 Xin Long 2017-08-09 06:41:49 UTC
The description of the loadbalance runner in active mode in the man page seems different from the kernel's behavior.

Comment 4 Xin Long 2017-08-09 10:32:00 UTC
loadbalance — To do passive load balancing, runner only sets up BPF hash function which will determine port for packet transmit. To do active load balancing, runner moves hashes among available ports trying to reach perfect balance.

The 'hashes' are generated according to "tx_hash": ["eth", "ipv4", "ipv6", ...].
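For illustration, a minimal config sketch (not the exact command used in this report) that spells out "tx_hash" and the rebalancing interval; the option names follow teamd.conf(5), where balancing_interval is given in tenths of a second:

teamd -d -t team0 -c '{
        "runner": {
                "name": "loadbalance",
                "tx_hash": ["eth", "ipv4", "ipv6"],
                "tx_balancer": { "name": "basic", "balancing_interval": 50 }
        },
        "link_watch": { "name": "ethtool" }
}'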
Can you try pinging different destination addresses? For example:
ping 192.168.1.254
ping 192.168.1.253
ping 192.168.1.252
...
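A quick sketch of that check (assuming the $slave1/$slave2/team0 setup from the reproduction steps above, and that the extra 192.168.1.x addresses actually answer pings):

# snapshot the per-slave TX counters, ping several destinations (several
# hash slots), then see how the traffic was split between the two slaves
before1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
before2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

for ip in 192.168.1.254 192.168.1.253 192.168.1.252 192.168.1.251; do
        ping -c 20 "$ip"
done

after1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
after2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

echo "$slave1 sent $(( after1 - before1 )) packets"
echo "$slave2 sent $(( after2 - before2 )) packets"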

Comment 5 Xin Long 2017-09-05 03:45:18 UTC
The algorithm for tx_balancer is:
There are 256 slots in the hash table, and each slot is mapped to one interface, like:
hash 0x0   ---> eth1
hash 0x1   ---> eth1
...
hash 0xfc ---> eth2
hash 0xfd ---> eth2
hash 0xfe ---> eth1

When sending a packet:
================
Compute the hash key (from the 'eth', 'ipv4', ... fields), then look up that hash in the table to find the interface on which the packet will be sent out.

When rebalancing:
==============
This happens every 5 seconds by default (runner.tx_balancer.balancing_interval). It is a loop in which libteam repeatedly takes the hash slot that has sent the most bytes, picks the interface with the lowest load, and remaps that slot to that interface:

1. Clear the balance bytes for every interface (eth1.balance_bytes=0, eth2.balance_bytes=0).
2. Get the biggest_unprocessed_hash (e.g. hash 0x10) and the least_loaded_port (eth1, the first one if both interfaces' balance_bytes are 0), then map "hash 0x10 ---> eth1", mark this hash slot as processed, and add the slot's bytes (from the last 5 seconds) to eth1.balance_bytes.
3. Go back to step 2 until no unprocessed hash remains.


So in your test case there is only one target IP (only one hash slot is used), and according to the algorithm above it will usually choose just one of the two interfaces.
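To make the remapping concrete, here is a rough shell sketch of that greedy loop (an illustration only, not libteam source), scaled down to 4 hash slots with made-up byte counts for one balancing interval:

#!/bin/bash
declare -A slot_bytes=( [0x0]=9000 [0x1]=200 [0x2]=0 [0x3]=4000 )  # hypothetical per-slot traffic
declare -A port_load=( [eth1]=0 [eth2]=0 )
declare -A slot_port

while (( ${#slot_bytes[@]} > 0 )); do
        # step 2: pick the unprocessed slot that sent the most bytes...
        busiest=""; busiest_bytes=-1
        for s in "${!slot_bytes[@]}"; do
                if (( ${slot_bytes[$s]} > busiest_bytes )); then
                        busiest=$s
                        busiest_bytes=${slot_bytes[$s]}
                fi
        done
        # ...and the port with the lowest accumulated load
        target=eth1
        if (( ${port_load[eth2]} < ${port_load[eth1]} )); then
                target=eth2
        fi
        # remap the slot to that port and charge the slot's bytes to it
        slot_port[$busiest]=$target
        port_load[$target]=$(( ${port_load[$target]} + busiest_bytes ))
        unset "slot_bytes[$busiest]"
done

for s in "${!slot_port[@]}"; do
        echo "hash $s -> ${slot_port[$s]}"
done

With only one busy slot (one target IP), the same loop keeps all traffic on a single port, which matches what the reproduction script observes.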

The rx_balance should work for a case like:
[a] ping -c1 192.168.10.253
[b] ping -c1 192.168.10.253
[c] ping -c1 192.168.10.252
[d] ping -c1 192.168.10.251

a and b will go to eth1
c and d will go to eth2

Hi Amit, can you please check bonding's rx_balance behavior? This is probably not a bug.

Comment 6 Xin Long 2017-09-06 05:15:52 UTC
Since it works as expected, I am closing this. Please reopen if you have any questions. Thanks.

