
Bug 1255474

Summary: [RFE][SCALE] traffic shaping on ovirtmgmt interface
Product: [oVirt] ovirt-engine
Reporter: Michal Skrivanek <michal.skrivanek>
Component: RFEs
Assignee: nobody <nobody>
Status: CLOSED WONTFIX
QA Contact: Michael Burman <mburman>
Severity: high
Priority: medium
Docs Contact:
Version: ---
CC: bugs, danken, gklein, lsurette, mburman, mgoldboi, mpoledni, myakove, rbalakri, sbonazzo, srevivo, ycui, ykaul, ylavi
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: ylavi: ovirt-future?, mburman: testing_plan_complete+, ylavi: planning_ack?, ylavi: devel_ack?, ylavi: testing_ack?
Hardware: Unspecified
OS: Unspecified
URL: -
See Also: https://bugzilla.redhat.com/show_bug.cgi?id=1364145
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Define a default QoS on management networks.
Reason: Heavy traffic over the management network can make the host non-responsive.
Result: The QoS ensures that the connection between the engine and the host is properly prioritized.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-06 03:41:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Bug Depends On: 1271094, 1340702, 1346318, 1348448
Bug Blocks: 1252426, 1428232

Description Michal Skrivanek 2015-08-20 12:45:00 EDT
Sharing the same interface for VM networks, management, and migrations (and display, and storage) is discouraged, yet common.
Overloading that single interface, e.g. by a mass migration or a peak in VM activity, causes serious issues with engine-vdsm communication. We see timeouts and monitoring problems, eventually leading to non-responsiveness of the host, which causes even worse issues.

In order to keep management working, we should employ traffic shaping to guarantee that some bandwidth is always available to vdsm.
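The shaping described here can be sketched with the Linux traffic control tool, `tc`. The following is a minimal illustration only, not oVirt's actual implementation (which applies host network QoS through vdsm): the device name, rates, class ids, and the 1 Gbit link assumption are all hypothetical, and vdsm's default port 54321 is used to classify management traffic.

```shell
# Sketch: reserve guaranteed bandwidth for management traffic with tc HTB.
# All names, rates and class ids are assumptions for illustration only.
# The function emits the tc commands as text; pipe them to `sh` as root
# to actually apply them, e.g.:  gen_tc_rules ovirtmgmt | sudo sh
gen_tc_rules() {
    dev=$1
    cat <<EOF
tc qdisc add dev $dev root handle 1: htb default 30
tc class add dev $dev parent 1: classid 1:1 htb rate 1gbit
tc class add dev $dev parent 1:1 classid 1:10 htb rate 100mbit ceil 1gbit
tc class add dev $dev parent 1:1 classid 1:30 htb rate 900mbit ceil 1gbit
tc filter add dev $dev parent 1: protocol ip u32 match ip dport 54321 0xffff flowid 1:10
EOF
}

gen_tc_rules ovirtmgmt
```

Class 1:10 is guaranteed 100 Mbit but may borrow up to the full link when the rest is idle, so the reservation costs nothing while management traffic is light.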
Comment 2 Michal Skrivanek 2015-09-04 05:36:17 EDT
This is part of the overall migration improvement effort tracked by bug 1252426 (hence the 4.0 timeframe).
Comment 3 Red Hat Bugzilla Rules Engine 2015-10-19 06:49:59 EDT
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Comment 5 Dan Kenigsberg 2015-12-09 09:58:12 EST
rhev-3.6 features host network QoS (bug 1043226). With it, customers can manually set their own capping on the migration network. As far as I understand, this RFE tracks setting up a magical good policy by default.
Comment 6 Michal Skrivanek 2015-12-10 04:54:16 EST
(In reply to Dan Kenigsberg from comment #5)
> rhev-3.6 features host network QoS (bug 1043226). with it, customers can
> manually set their own capping on migration network. As far as I understand,
> this RFE tracks setting up a magical good policy by default.

Yes, a magic default QoS for the management network, to make sure heartbeats and essential communication work at all times.
Comment 7 Dan Kenigsberg 2016-01-20 05:20:57 EST
Meni, could you define a mgmt network with QoS (link share, later an absolute limit) and then define another network with no QoS on the same NIC via the vdsm API (as Engine blocks this configuration)? Then repeat on another host, and stress-test the host-to-host communication with two concurrent iperfs (one per network).
Please report the throughput of each network for each QoS flavour.
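The two-concurrent-iperfs measurement above could be scripted roughly as follows. The peer addresses and ports are placeholders, and `iperf3` is assumed to be available on both hosts, with servers started on the receiving side (`iperf3 -s -p 5201` and `iperf3 -s -p 5202`):

```shell
# Sketch: saturate the QoS'd mgmt network and the plain network at once.
# Peer IPs and ports are placeholders; set DRY_RUN=0 to really run iperf3.
MGMT_PEER=${MGMT_PEER:-192.0.2.10}    # peer address on the mgmt (QoS) network
OTHER_PEER=${OTHER_PEER:-192.0.2.11}  # peer address on the non-QoS network
DRY_RUN=${DRY_RUN:-1}

# In dry-run mode just print the command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Drive both links concurrently for 30 s; each client reports its own
# throughput, which is the per-network number to record per QoS flavour.
run iperf3 -c "$MGMT_PEER"  -p 5201 -t 30 &
run iperf3 -c "$OTHER_PEER" -p 5202 -t 30 &
wait
```

Repeating the run with and without the QoS configuration gives the per-flavour throughput comparison Dan asks for.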
Comment 8 Dan Kenigsberg 2016-01-27 04:21:37 EST
Assuming we can drop the Engine-side validation of QoS and non-QoS networks on the same NIC, this would be doable.
Comment 10 Dan Kenigsberg 2016-02-17 04:37:47 EST
Applying QoS on the management network by default may have a serious impact on host CPU. We need to measure that before applying it to most users.
Gil, can we find the time to do that?
Comment 11 Dan Kenigsberg 2016-03-15 09:54:11 EDT
Clearing needinfo, as mburman reports no distinguishable effect on performance.

I suggest setting the default QoS on a network whenever it is assigned the management role.

To properly implement this RFE, we'd need to set a QoS on each cluster's management network. Since the network is a DC entity (until http://www.ovirt.org/feature/remove-dc-entity-network/ is implemented), setting its QoS on one cluster may put other clusters' hosts out of sync.

This is a bit ugly, but may be amended by the user clearing the default QoS or applying it on the other clusters as well.
Comment 12 Michael Burman 2016-06-15 08:19:06 EDT
This feature currently has no effect; we depend on BZ 1346318.
Comment 13 Red Hat Bugzilla Rules Engine 2016-06-15 08:19:15 EDT
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Comment 14 Yaniv Lavi 2018-06-06 03:41:16 EDT
Closing old RFEs; please reopen if still needed.
Patches are always welcome.