Regardless of the slaved physical devices, the bond always reports 1000 as its speed:

  # vdsClient -s 0 getVdsStats | grep network
  network = {'bond4': {'macAddr': '', 'name': 'bond4', 'txDropped': '0', 'rxErrors': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'down', 'speed': '1000', 'rxDropped': '0'} ...

Checking the upstream code, vdsm/sampling.py:

  ...
  439         ifrate = ifrate or 1000
  ...

According to the code, bond interfaces are always assumed to have a speed of 1 Gbps. Depending on the network arrangement on the host, the txRate/rxRate of the traffic is attributed to the bond interface, which leads to inaccurate network usage in the Admin Portal graph. For example, 1 Gbps of txRate on a bond device in active-backup mode with two 10 Gbps nics is shown as 100% network usage in the Admin Portal graph; the correct value is 10%. If the bond mode is Link Aggregation, the correct value would be 5%.

Please report the bond speed as a function of the slaved nics and the bond mode.
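For illustration only, a minimal sketch of how such a speed could be derived from the slaves and the bond mode. The function name, its signature, and the mode grouping below are assumptions for this example, not the actual vdsm implementation:

  # Sketch: report bond speed as a function of the slave nics and bond mode.
  # Bonding modes where traffic can be spread across all slaves, so the
  # usable bandwidth is roughly the sum of the slave speeds
  # (balance-rr, balance-xor, 802.3ad, balance-tlb, balance-alb).
  AGGREGATING_MODES = frozenset(('0', '2', '4', '5', '6'))

  def bond_speed(slave_speeds, bond_mode, active_slave_speed=None):
      """Return the effective bond speed in Mbps.

      slave_speeds       -- speeds of the enslaved nics, in Mbps
      bond_mode          -- bonding mode as a string, e.g. '1' for active-backup
      active_slave_speed -- speed of the currently active slave, if known
      """
      if not slave_speeds:
          return 0
      if bond_mode in AGGREGATING_MODES:
          return sum(slave_speeds)
      # active-backup (mode 1) and broadcast (mode 3): only one slave's
      # worth of bandwidth is usable at a time.
      return active_slave_speed or max(slave_speeds)

With such a speed, 1000 Mbps of txRate on an active-backup bond of two 10000 Mbps nics would be reported against bond_speed([10000, 10000], '1') = 10000, i.e. 10% usage, and 5% if the bond aggregates both links.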
Verified in ovirt-engine-3.4.0-0.5.beta1.el6.noarch
We now report smarter speeds on Vdsm, but we should make sure that Engine makes proper use of them. Genady, would you verify that a bonded host with VMs consuming more than one nic's bandwidth is no longer reported as choked?

Based on what Moti said in today's team meeting, I am a bit worried that the updated bond speed is not used by Engine. If it does not, please clone the bug to Engine, to have the work continued there.
(In reply to Dan Kenigsberg from comment #9)
> We now report smarter speeds on Vdsm, but we should make sure that Engine
> makes proper use of them. Genady, would you verify that a bonded host with
> VMs consuming more than one nic's bandwidth is no longer reported as choked?
>
> Based on what Moti said in today's team meeting, I am a bit worried that the
> updated bond speed is not used by Engine. If it does not, please clone the
> bug to Engine, to have the work continued there.

There is already an open Bug 980363 for the engine to reflect the actual speed reported by vdsm for non-physical devices. This bug should be closed, as it has already been verified from the vdsm side. It seems that the customer's info should be moved to Bug 980363.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0504.html