Bug 1479384 - team loadbalance runner in active mode does not work properly [NEEDINFO]
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libteam
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Xin Long
QA Contact: Network QE
Depends On:
Reported: 2017-08-08 09:17 EDT by Amit Supugade
Modified: 2017-09-06 01:15 EDT
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-09-06 01:15:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
mleitner: needinfo? (asupugad)

Attachments: None
Description Amit Supugade 2017-08-08 09:17:27 EDT
Description of problem:
team loadbalance runner in active mode does not work properly

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

get_diff_percentage () {
        if (( $1 < $2 )); then
                MIN=$1; MAX=$2
        else
                MIN=$2; MAX=$1
        fi
        let "PERCENT=($MAX-$MIN)*100/$MAX"
        diff_percentage=$PERCENT
}
teamd -d -t team0 -c '{ "runner" : { "name" : "loadbalance", "tx_balancer" : { "name" : "basic" } }, "link_watch" : { "name" : "ethtool" } }'
teamdctl team0 port add $slave1
teamdctl team0 port add $slave2
ip link set team0 up
pkill dhclient; sleep 3; dhclient -v team0

tx_count_before_slave1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
tx_count_before_slave2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

ping -c 20 $target_ip

tx_count_after_slave1=$(cat /sys/class/net/$slave1/statistics/tx_packets)
tx_count_after_slave2=$(cat /sys/class/net/$slave2/statistics/tx_packets)

diff1=$(($tx_count_after_slave1 - $tx_count_before_slave1))
diff2=$(($tx_count_after_slave2 - $tx_count_before_slave2))

get_diff_percentage $diff1 $diff2
echo $diff_percentage

Actual results:
Packets are not equally distributed amongst the slaves; traffic is sent out via only one slave.

Expected results:
Packets should be equally distributed ($diff_percentage in the above script should be close to zero)

Additional info:
Comment 2 Marcelo Ricardo Leitner 2017-08-08 15:54:18 EDT
Okay but what are your results?
Comment 3 Xin Long 2017-08-09 02:41:49 EDT
The description of the loadbalance runner in active mode in the man page seems to differ from the kernel's behavior.
Comment 4 Xin Long 2017-08-09 06:32:00 EDT
loadbalance — To do passive load balancing, runner only sets up BPF hash function which will determine port for packet transmit. To do active load balancing, runner moves hashes among available ports trying to reach perfect balance.

The 'hashes' are generated according to "tx_hash": ["eth", "ipv4", "ipv6"...].
Can you try pinging different daddrs? Like:
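For instance (a dry-run sketch; these destination addresses are illustrative placeholders, not values from this report):

```shell
# Dry run: print one ping per destination address instead of sending traffic.
# With "tx_hash" including "ipv4", distinct daddrs can hash to different
# slots and therefore exercise both slaves. Drop the echo to really ping.
daddrs=(192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104)
for daddr in "${daddrs[@]}"; do
    echo ping -c 20 "$daddr"
done
```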
Comment 5 Xin Long 2017-09-04 23:45:18 EDT
The algorithm for tx_balancer is:
there are 256 slots in the hash table; each slot is mapped to one interface, like:
hash 0x0   ---> eth1
hash 0x1   ---> eth1
hash 0xfc ---> eth2
hash 0xfd ---> eth2
hash 0xfe ---> eth1

when sending a pkt:
get the hash key (by 'eth' or 'ipv4'), then look up the right interface in the hash table and send the pkt out on it.
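As a rough illustration of that lookup (not libteam source; the two-slot table and the octet-sum hash below are toy stand-ins for the real 256-slot table and the BPF hash function):

```shell
# Illustrative per-packet lookup: hash the packet's IPv4 destination
# address into a slot, then transmit on the port mapped to that slot.
declare -A tx_slot_map=( [0]=eth1 [1]=eth2 )   # sample 2-slot table

tx_port_for () {
    local daddr=$1
    local IFS=. sum=0 o
    for o in $daddr; do (( sum += o )); done   # toy hash: sum of octets
    echo "${tx_slot_map[$((sum % ${#tx_slot_map[@]}))]}"
}

tx_port_for 192.168.1.10    # a fixed daddr always maps to the same port
```

The point for this bug: a fixed destination address always hashes to the same slot, so every packet of that flow leaves on the same port.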

when rebalancing:
This happens every 5 seconds by default (configurable via runner.tx_balancer.balancing_interval). In a loop, libteam repeatedly takes the hash slot that has sent the most traffic, chooses the interface with the lowest load, and remaps that slot to the interface:

1. Clear the balance bytes for every interface (eth1.balance_bytes=0, eth2.balance_bytes=0).
2. Get the biggest unprocessed hash (like hash 0x10) and the least loaded port (eth1, the first one if both balance_bytes are 0), then map: "hash 0x10 ---> eth1", mark this hash slot as processed, and set eth1.balance_bytes += slot.bytes (sent in these 5 seconds).
3. Go back to step 2 until there are no more unprocessed hashes.
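Steps 1-3 can be sketched as follows (an illustration of the pass described above, not libteam source; the slot keys and byte counts are made-up sample data):

```shell
# Sketch of one rebalance pass over three used hash slots and two ports.
declare -A slot_bytes=( [0x10]=5000 [0x2a]=3000 [0x7f]=1000 )  # bytes per slot
declare -A port_bytes=( [eth1]=0 [eth2]=0 )   # step 1: balance_bytes cleared
declare -A slot_map                           # resulting hash slot -> port map

while (( ${#slot_bytes[@]} > 0 )); do
    # step 2a: pick the unprocessed slot that sent the most bytes
    biggest="" max=-1
    for s in "${!slot_bytes[@]}"; do
        (( slot_bytes[$s] > max )) && { biggest=$s; max=${slot_bytes[$s]}; }
    done
    # step 2b: pick the least loaded port
    least="" min=-1
    for p in "${!port_bytes[@]}"; do
        if (( min < 0 )) || (( port_bytes[$p] < min )); then
            least=$p; min=${port_bytes[$p]}
        fi
    done
    # step 2c: remap the slot, account its bytes, mark it processed
    slot_map[$biggest]=$least
    (( port_bytes[$least] += max ))
    unset "slot_bytes[$biggest]"              # step 3: loop until none left
done

for s in "${!slot_map[@]}"; do echo "hash $s ---> ${slot_map[$s]}"; done
```

With sample data like this, the busiest slot ends up alone on one port while the two lighter slots share the other.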

So in your test case there is only one target IP (only one hash slot used), so it usually chooses just one of these two interfaces, according to the algorithm above.

The rx_balance should work for a case like:
[a] ping -c1
[b] ping -c1
[c] ping -c1
[d] ping -c1

a and b will go to eth1
c and d will go to eth2

Hi Amit, can you please check bonding's rx_balance behavior? This is probably not a bug.
Comment 6 Xin Long 2017-09-06 01:15:52 EDT
Since it works as expected, I am closing this; please reopen if you have any questions. Thanks.
