Bug 1255474

Summary: [RFE][SCALE] traffic shaping on ovirtmgmt interface
Product: [oVirt] ovirt-engine
Reporter: Michal Skrivanek <michal.skrivanek>
Component: RFEs
Assignee: Nobody <nobody>
Status: CLOSED WONTFIX
QA Contact: Michael Burman <mburman>
Severity: high
Docs Contact:
Priority: medium
Version: ---
CC: bugs, danken, lsurette, mburman, mgoldboi, mpoledni, myakove, rbalakri, sbonazzo, srevivo, ycui, ykaul, ylavi
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: ylavi: ovirt-future?
       mburman: testing_plan_complete+
       ylavi: planning_ack?
       ylavi: devel_ack?
       ylavi: testing_ack?
Hardware: Unspecified
OS: Unspecified
URL: -
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Define a default QoS on the management networks.
Reason: Heavy traffic over the management network can make the host non-responsive.
Result: The QoS ensures that the connection between the engine and the host is properly prioritized.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-06 07:41:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1271094, 1340702, 1346318, 1348448
Bug Blocks: 1252426, 1428232

Description Michal Skrivanek 2015-08-20 16:45:00 UTC
Sharing the same interface for VM networks, management, and migrations (and display, and storage) is discouraged, yet common.
Overloading that single interface, e.g. by a mass migration or a peak in VM activity, causes serious issues with engine-vdsm communication: we see timeouts and problems in monitoring, eventually rendering the host non-responsive, which causes even worse issues.

In order to keep management working, we should employ traffic shaping to guarantee that some bandwidth is always available to vdsm.
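
A minimal sketch of what such shaping could look like on the host, assuming an HFSC setup in the spirit of vdsm's host network QoS. The interface name, rates, and class IDs are illustrative only, and a tc filter steering management traffic into the reserved class would still be needed:

    #!/usr/bin/env python3
    # Hypothetical sketch: reserve a minimum link share for management
    # traffic with an HFSC qdisc. Interface and rates are illustrative.
    import subprocess

    IFACE = "ovirtmgmt"  # assumed device carrying management traffic

    def tc(*args):
        # Run a tc(8) command, raising on failure.
        subprocess.run(["tc", *args], check=True)

    # Root HFSC qdisc; unclassified traffic falls into class 1:20.
    tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "hfsc",
       "default", "20")
    # Guarantee the management class at least 100 Mbit/s of link share.
    tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
       "hfsc", "ls", "m2", "100mbit")
    # Let everything else share the remaining bandwidth.
    tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:20",
       "hfsc", "ls", "m2", "900mbit")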

Comment 2 Michal Skrivanek 2015-09-04 09:36:17 UTC
This is part of the overall migration improvement effort tracked by bug 1252426 (hence the 4.0 timeframe).

Comment 3 Red Hat Bugzilla Rules Engine 2015-10-19 10:49:59 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 5 Dan Kenigsberg 2015-12-09 14:58:12 UTC
rhev-3.6 features host network QoS (bug 1043226). With it, customers can manually set their own capping on the migration network. As far as I understand, this RFE tracks setting up a magically good policy by default.
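
For illustration, such a manual capping could be pushed through vdsm's setupNetworks verb. A minimal sketch, assuming the vdsm jsonrpc Python client; the network/NIC names and rate values are made up, and the exact hostQos units should be checked against the vdsm API schema:

    #!/usr/bin/env python3
    # Hypothetical sketch: attach host network QoS to a migration
    # network via vdsm's setupNetworks verb. Names/values illustrative.
    from vdsm import client

    cli = client.connect("localhost", 54321)  # local jsonrpc endpoint
    cli.Host.setupNetworks(
        networks={
            "migration": {
                "nic": "em1",
                "bridged": False,
                # 'ls' = link share, 'ul' = upper limit (the capping);
                # 'm2' is the long-term rate (units per the API schema).
                "hostQos": {"out": {"ls": {"m2": 500}, "ul": {"m2": 500}}},
            }
        },
        bondings={},
        options={"connectivityCheck": False},
    )
    cli.Host.setSafeNetworkConfig()  # persist the configuration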

Comment 6 Michal Skrivanek 2015-12-10 09:54:16 UTC
(In reply to Dan Kenigsberg from comment #5)
> rhev-3.6 features host network QoS (bug 1043226). With it, customers can
> manually set their own capping on the migration network. As far as I
> understand, this RFE tracks setting up a magically good policy by default.

Yes: a magic default QoS for the management network, to make sure that heartbeats and essential communication work at all times.

Comment 7 Dan Kenigsberg 2016-01-20 10:20:57 UTC
Meni, could you define a mgmt network with QoS (link share; later, an absolute limit) and then define another network with no QoS on the same NIC via the vdsm API (the Engine blocks this configuration)? Then repeat on another host, and stress-test the host-to-host communication with two concurrent iperfs (one per network).
Please report the throughput of each network for each QoS flavour.
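
A sketch of how that measurement could be driven, assuming iperf servers are already listening on the peer host; the addresses for the two networks are illustrative:

    #!/usr/bin/env python3
    # Hypothetical sketch: run two concurrent iperf clients, one bound
    # to each network's local address, and print their throughput
    # reports. Addresses are illustrative.
    import subprocess

    RUNS = [
        ("qos-net", "192.0.2.10", "192.0.2.20"),           # mgmt net, QoS
        ("no-qos-net", "198.51.100.10", "198.51.100.20"),  # same NIC, no QoS
    ]

    procs = []
    for name, local, remote in RUNS:
        # -B binds the client to the given local address so traffic
        # leaves through the intended network.
        p = subprocess.Popen(["iperf", "-c", remote, "-B", local, "-t", "60"],
                             stdout=subprocess.PIPE, text=True)
        procs.append((name, p))

    for name, p in procs:
        out, _ = p.communicate()
        print("--- %s ---" % name)
        print(out)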

Comment 8 Dan Kenigsberg 2016-01-27 09:21:37 UTC
Assuming we can drop the Engine-side validation that forbids mixing QoS and non-QoS networks on the same NIC, this would be doable.

Comment 10 Dan Kenigsberg 2016-02-17 09:37:47 UTC
Applying QoS on the management network by default may have a serious impact on host CPU. We need to measure that before applying it to most users.
Gil, can we find the time to do that?

Comment 11 Dan Kenigsberg 2016-03-15 13:54:11 UTC
Clearing needinfo, as mburman reports no distinguishable effect on performance.

I suggest setting the default QoS on a network whenever it is assigned the management role.

To properly implement this RFE, we'd need to set a QoS on each cluster's management network. Since the network is a DC entity (until http://www.ovirt.org/feature/remove-dc-entity-network/ is implemented), setting its QoS on one cluster may put other clusters' hosts into an out-of-sync state.

This is a bit ugly, but may be amended by the user clearing the default QoS or applying it on the other clusters as well.

Comment 12 Michael Burman 2016-06-15 12:19:06 UTC
This feature currently has no meaning; we depend on BZ 1346318.

Comment 13 Red Hat Bugzilla Rules Engine 2016-06-15 12:19:15 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 14 Yaniv Lavi 2018-06-06 07:41:16 UTC
Closing old RFEs; please reopen if still needed.
Patches are always welcome.