+++ This bug was initially created as a clone of Bug #882093 +++

In the review process, one step is to check the real transfer rate. If it is much lower than the nominal value, we ask the vendor to explain the reason and to run some manual tests.

In cert#828100, v7 reports the bw_tcp result as about 400 MB/sec:

<snip>
TCP latency using 10.0.0.10: 0.0940 microseconds
testing bandwidth to 10.0.0.10...
0.065536 408.99 MB/sec
</snip>

Meanwhile, the vendor reports that a manual run of bw_tcp returns about 2.1 GB/sec, and iperf returns about 27 Gb/sec. For details, please refer to https://hardware.redhat.com/show.cgi?id=828100#c45

<snip>
[root@SA2260-X9DR3-LN4F ~]# bw_tcp -m 65520 -P 8 10.0.0.10
0.065520 2166.37 MB/sec
</snip>

The vendor also provides the manual steps and config info in https://hardware.redhat.com/show.cgi?id=828100#c47

<snip>
I did the following steps:
- set CONNECTED_MODE=yes and MTU=65520 in /etc/sysconfig/network-scripts/ifcfg-ib0 (for both server and test system)
- set the governor to performance in /etc/sysconfig/cpuspeed
- bound the process qib_cq to a specific CPU core with 'taskset -pc 3 $(pidof qib_cq)' on both server and client
</snip>

Could the network test process be improved to reflect the maximum real transfer rate?

--- Additional comment from QinXie on 2012-11-30 00:16:03 EST ---

Checking with the vendor on the reason for a lower transfer rate and waiting for the vendor's manual results can take a long time. If we can improve this, it would help shorten the cert closure time.

--- Additional comment from Marcus Wiedemann on 2012-11-30 11:11:21 EST ---

I want to add some remarks:

- To get a result near the maximum throughput, it is best to use eight (8) parallel streams. I also tested with lower and higher values, but it turned out that the configuration with eight streams shows the best performance.
- The taskset command is most useful when the chosen CPU core belongs to the CPU that provides the PCIe lanes for the InfiniBand adapter (in my case a QLE7340).

Best regards
Marcus Wiedemann

--- Additional comment from Caspar Zhang on 2012-12-19 11:50:42 EST ---

*** Bug 882088 has been marked as a duplicate of this bug. ***
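For reference, the vendor's tuning steps from comment #47 could be scripted roughly as below. This is only a sketch: the interface name ib0, core number 3, and server address 10.0.0.10 are taken from the comment, and the exact cpuspeed config key may vary by RHEL release.

<snip>
# IPoIB connected mode with a 64 KB MTU, on both server and test system
# (appending for illustration only; in practice edit the file by hand)
cat >> /etc/sysconfig/network-scripts/ifcfg-ib0 <<'EOF'
CONNECTED_MODE=yes
MTU=65520
EOF
ifdown ib0 && ifup ib0

# Pin the CPU frequency governor to "performance"
sed -i 's/^GOVERNOR=.*/GOVERNOR=performance/' /etc/sysconfig/cpuspeed
service cpuspeed restart

# Bind the qib completion-queue kernel thread to core 3 on both ends,
# ideally a core on the socket whose PCIe lanes host the IB adapter
taskset -pc 3 $(pidof qib_cq)

# Re-run the bandwidth test with 8 parallel streams and a 64 KB message size
bw_tcp -m 65520 -P 8 10.0.0.10
</snip>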
Created attachment 731981 [details]
set measured bandwidth goal at 80% over multiple threads

This patch improves the TCP bandwidth test to check the average measured speed against the detected interface speed. An initial attempt is made with parallelism set to 2, and if the 80% goal is not met, it is retried with parallelism of 4 and then 8. The patch will not fail the test on missed goals; it only produces a warning message.
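A minimal shell sketch of that retry logic, not the actual patch code: SERVER and LINK_MBS are assumed names, LINK_MBS standing in for whatever interface speed the test detects, and the 2>&1 is there because lmbench prints its results on stderr.

<snip>
#!/bin/bash
SERVER=10.0.0.10
LINK_MBS=119        # e.g. 1 Gb/s ethernet is roughly 119 MB/sec
GOAL=$(awk -v l=$LINK_MBS 'BEGIN { print l * 0.80 }')

for p in 2 4 8; do
    # average five runs; bw_tcp reports "<size> <rate> MB/sec" per run
    avg=$(for i in 1 2 3 4 5; do bw_tcp -P $p -m 1m $SERVER; done 2>&1 |
          awk '/MB\/sec/ { sum += $2; n++ } END { printf "%.2f", sum / n }')
    echo "parallelism $p: average $avg MB/sec (goal $GOAL MB/sec)"
    if awk -v a=$avg -v g=$GOAL 'BEGIN { exit !(a+0 >= g+0) }'; then
        exit 0          # goal met: pass quietly
    fi
done
echo "Warning: bandwidth goal of 80% of interface speed was not met" >&2
exit 0                  # warn only, do not fail the test
</snip>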
committed to R32
(In reply to comment #4)
> The patch will not fail the test on missed goals, it will only produce a
> warning message.

Once we are confident that these changes behave consistently across the various network types and speeds, can we go ahead and add a fail for <50%? Chris and Gary, can we safely put in 50% as the absolute minimum performance for a network? Should we talk with Gospo, Ledford, and Linville first?
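If that hard floor goes in, the sketch above would grow a second threshold, something like the following (again hypothetical; $best_avg stands for the best average seen across the parallelism retries, and 50% is only the proposed fail line):

<snip>
WARN_GOAL=$(awk -v l=$LINK_MBS 'BEGIN { print l * 0.80 }')
FAIL_GOAL=$(awk -v l=$LINK_MBS 'BEGIN { print l * 0.50 }')

if awk -v a=$best_avg -v g=$FAIL_GOAL 'BEGIN { exit !(a+0 < g+0) }'; then
    echo "FAIL: best average below 50% of interface speed" >&2
    exit 1
elif awk -v a=$best_avg -v g=$WARN_GOAL 'BEGIN { exit !(a+0 < g+0) }'; then
    echo "Warning: best average below 80% of interface speed" >&2
fi
exit 0
</snip>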
Using hwcert-client 1.7.0-38.el7

Subtest: TCP-bandwidth - tcp bandwidth test via lmbench
testing bandwidth to gnichols.usersys.redhat.com
bw_tcp -P 2 -m 1m gnichols.usersys.redhat.com
1.048576 113.06 MB/sec
1.048576 114.41 MB/sec
1.048576 112.09 MB/sec
1.048576 114.54 MB/sec
1.048576 114.65 MB/sec
Average Bandwidth: 113.75

PASS
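For context, assuming this run was over gigabit ethernet (the log does not say), the 80% goal works out as follows: a 1000 Mb/s link is at most 1000 / 8 = 125 MB/sec, so the warning threshold is about 0.80 * 125 = 100 MB/sec, and the measured average of 113.75 MB/sec clears it at roughly 91% of line rate:

<snip>
$ awk 'BEGIN { link = 1000 / 8; goal = 0.80 * link;
               printf "goal %.1f MB/sec, measured 113.75 MB/sec (%.0f%% of line rate)\n",
                      goal, 100 * 113.75 / link }'
goal 100.0 MB/sec, measured 113.75 MB/sec (91% of line rate)
</snip>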