Bug 1255474 - [RFE][SCALE] traffic shaping on ovirtmgmt interface
Status: NEW
Product: ovirt-engine
Classification: oVirt
Component: RFEs
---
Hardware/OS: Unspecified / Unspecified
Priority: medium  Severity: high
Target Milestone: ovirt-4.3.0
Target Release: ---
Assigned To: nobody
QA Contact: Michael Burman
Docs Contact: -
Keywords: FutureFeature
Depends On: 1346318 1271094 1340702 1348448
Blocks: migration_improvements 1428232
Reported: 2015-08-20 12:45 EDT by Michal Skrivanek
Modified: 2017-09-28 04:19 EDT (History)
CC List: 15 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Define a default QoS on the management networks.
Reason: Heavy traffic over the management network might cause the host to become unresponsive.
Result: The QoS ensures that the connection between the engine and the host is properly prioritized.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ylavi: ovirt-4.3?
mburman: testing_plan_complete+
ylavi: planning_ack?
ylavi: devel_ack?
ylavi: testing_ack?


Attachments: none


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 56429 master MERGED engine: Create default network QoS for ovirtmgmt upon DC creation 2016-04-26 09:27 EDT

Description Michal Skrivanek 2015-08-20 12:45:00 EDT
Sharing the same interface for VM networks, management, and migrations (and display, and storage) is discouraged, yet common.
Overloading that single interface, e.g. by a mass migration or a peak in VM activity, causes serious issues with engine-vdsm communication: we see timeouts and monitoring problems, eventually making the host non-responsive, which causes even worse issues.

In order to keep management working we should employ traffic shaping to guarantee that some bandwidth is always available to vdsm.
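A minimal sketch of the guarantee being asked for (illustrative only, not VDSM code; the class names, weights, and link speed are hypothetical): under a link-share scheme, each traffic class on a saturated link receives bandwidth in proportion to its weight, so the management class keeps a floor even while migration traffic demands everything.

```python
def share_bandwidth(link_mbps, demands, weights):
    """Split a saturated link among traffic classes by link-share weight.

    demands/weights: dicts keyed by class name. A class never receives
    more than it demands; leftover capacity is redistributed among the
    still-hungry classes in proportion to their weights.
    """
    alloc = {name: 0.0 for name in demands}
    remaining = float(link_mbps)
    hungry = {n for n, d in demands.items() if d > 0}
    while hungry and remaining > 1e-9:
        total_w = sum(weights[n] for n in hungry)
        satisfied = set()
        spent = 0.0
        for n in hungry:
            fair = remaining * weights[n] / total_w
            take = min(fair, demands[n] - alloc[n])
            alloc[n] += take
            spent += take
            if alloc[n] >= demands[n] - 1e-9:
                satisfied.add(n)
        remaining -= spent
        if not satisfied:
            break  # every class took its full fair share; link exhausted
        hungry -= satisfied

    return alloc

# Hypothetical 1000 Mbps NIC: migration wants far more than the link,
# management only needs 50 Mbps and is guaranteed at least its share.
alloc = share_bandwidth(
    1000,
    demands={"mgmt": 50, "migration": 2000},
    weights={"mgmt": 10, "migration": 90},
)
```

With these numbers the management class gets its full 50 Mbps (its 10% share would allow 100) and migration absorbs the remaining 950, which is the behavior this RFE wants by default on ovirtmgmt.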
Comment 2 Michal Skrivanek 2015-09-04 05:36:17 EDT
This is part of the overall migration improvement effort tracked by bug 1252426 (hence the 4.0 timeframe).
Comment 3 Red Hat Bugzilla Rules Engine 2015-10-19 06:49:59 EDT
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
Comment 5 Dan Kenigsberg 2015-12-09 09:58:12 EST
rhev-3.6 features host network QoS (bug 1043226). with it, customers can manually set their own capping on migration network. As far as I understand, this RFE tracks setting up a magical good policy by default.
Comment 6 Michal Skrivanek 2015-12-10 04:54:16 EST
(In reply to Dan Kenigsberg from comment #5)
> rhev-3.6 features host network QoS (bug 1043226). with it, customers can
> manually set their own capping on migration network. As far as I understand,
> this RFE tracks setting up a magical good policy by default.

yes, magic default QoS for management network to make sure heartbeats and essential communication works at all times
Comment 7 Dan Kenigsberg 2016-01-20 05:20:57 EST
Meni, could you define a mgmt network with QoS (link share, later an absolute limit) and then define another network with no QoS on the same NIC via the vdsm API (as Engine blocks this config)? Then repeat on another host, and stress-test the host-to-host communication with two concurrent iperfs (one per network).
Please report the throughput of each network for each QoS flavour.
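The contrast the two QoS flavours should show in that measurement can be sketched as follows (hypothetical numbers, not actual iperf results): a link-share weight only guarantees a proportional floor under contention, while an absolute (upper) limit also caps the class even when the link is otherwise idle.

```python
LINK = 1000  # Mbps, hypothetical NIC speed

def expected_throughput(mgmt_share, other_share, mgmt_ul=None, contended=True):
    """Expected mgmt-network throughput for one QoS flavour.

    mgmt_share/other_share: link-share weights; mgmt_ul: optional absolute
    upper limit in Mbps. Without contention a class may use the whole
    link unless an upper limit caps it.
    """
    if contended:
        mgmt = LINK * mgmt_share / (mgmt_share + other_share)
    else:
        mgmt = LINK
    if mgmt_ul is not None:
        mgmt = min(mgmt, mgmt_ul)
    return mgmt

# Link share only: mgmt keeps 30% under contention, the full link when idle.
assert expected_throughput(30, 70) == 300
assert expected_throughput(30, 70, contended=False) == 1000
# Link share plus a 100 Mbps upper limit: capped in both cases.
assert expected_throughput(30, 70, mgmt_ul=100) == 100
```

This is why link share alone is the friendlier default for ovirtmgmt: it costs nothing when the NIC is idle and only kicks in when the link is actually saturated.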
Comment 8 Dan Kenigsberg 2016-01-27 04:21:37 EST
Assuming we can drop the Engine-side validation of QoS and non-QoS networks on the same NIC, this would be doable.
Comment 10 Dan Kenigsberg 2016-02-17 04:37:47 EST
Applying QoS on the management network by default may have serious impact on host CPU. We need to measure that before applying it to most users.
Gil, can we find the time to do that?
Comment 11 Dan Kenigsberg 2016-03-15 09:54:11 EDT
clearing needinfo, as mburman reports no distinguishable effect on performance.

I suggest setting the default QoS on a network whenever it is assigned the management role.

To properly implement this RFE, we'd need to set a QoS on each cluster's management network. Since the network is a DC entity (until http://www.ovirt.org/feature/remove-dc-entity-network/ is implemented), setting its QoS on one cluster may put other clusters' hosts into an out-of-sync state.

This is a bit ugly, but may be amended by the user clearing the default QoS or applying it on the other clusters as well.
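Why the out-of-sync effect occurs (a toy model, not engine code; names are made up): the network definition lives at the DC level, so attaching a QoS for one cluster changes the definition that every cluster's hosts are compared against.

```python
# Toy model of the engine's sync check: a host is "out of sync" when its
# applied network config differs from the DC-level network definition.
dc_network = {"name": "ovirtmgmt", "qos": None}

hosts = {
    "cluster-a-host": {"name": "ovirtmgmt", "qos": None},
    "cluster-b-host": {"name": "ovirtmgmt", "qos": None},
}

def out_of_sync(dc_def, host_cfg):
    return dc_def != host_cfg

# Admin sets the default QoS while working on cluster A: the DC-level
# definition changes and cluster A's host is updated, but cluster B's
# host keeps its old (QoS-less) config.
dc_network["qos"] = {"ls": 10}
hosts["cluster-a-host"]["qos"] = {"ls": 10}

stale = [h for h, cfg in hosts.items() if out_of_sync(dc_network, cfg)]
```

Here `stale` contains only cluster B's host, which is exactly the situation the comment describes: the user must either clear the default QoS or apply it on the other clusters as well.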
Comment 12 Michael Burman 2016-06-15 08:19:06 EDT
This feature currently has no effect; we depend on bug 1346318.
Comment 13 Red Hat Bugzilla Rules Engine 2016-06-15 08:19:15 EDT
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
