Bug 1469503 - [DR] [RFE] Add policy for SPM election to auto select host with higher priority once it becomes operational
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: RFEs
Version: future
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Rob Young
QA Contact: Gil Klein
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-11 12:04 UTC by Elad
Modified: 2017-11-26 12:41 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-11-26 12:41:28 UTC
oVirt Team: Storage
Embargoed:
amureini: ovirt-future?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description Elad 2017-07-11 12:04:44 UTC
Mainly for active-active disaster recovery purposes, there should be an SPM election policy that automatically selects a host with higher SPM priority once it becomes operational again.
This is needed for active-active DR scenarios because a host located in the backup remote site, which usually has higher latency and lower bandwidth to the main site's storage, can end up elected as the SPM (for example, while the main site's hosts are in maintenance for an upgrade).

Of course, this automatic SPM selection should be rate-limited to prevent SPM election storms.
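
To make the request concrete, here is a rough sketch of the intended behavior in illustrative Python (all names are hypothetical; nothing like this exists in ovirt-engine):

    import time
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        spm_priority: int   # higher value = preferred SPM candidate
        operational: bool

    class AutoSpmPolicy:
        """Re-elect the SPM when a better host appears, with a cooldown."""

        def __init__(self, cooldown_seconds=600):
            self.cooldown = cooldown_seconds  # minimum time between re-elections
            self.last_election = 0.0

        def pick_spm(self, hosts, current_spm):
            candidates = [h for h in hosts if h.operational]
            if not candidates:
                return current_spm
            best = max(candidates, key=lambda h: h.spm_priority)
            # Move the role only if a strictly better host exists and the
            # cooldown has elapsed, to avoid SPM election storms.
            if (current_spm is None
                    or (best.spm_priority > current_spm.spm_priority
                        and time.monotonic() - self.last_election >= self.cooldown)):
                self.last_election = time.monotonic()
                return best
            return current_spm

    # The remote-site host holds SPM only until a higher-priority
    # main-site host becomes operational (and the cooldown allows it).
    policy = AutoSpmPolicy(cooldown_seconds=600)
    remote = Host("remote-site-host", spm_priority=2, operational=True)
    main = Host("main-site-host", spm_priority=10, operational=False)
    spm = policy.pick_spm([remote, main], current_spm=None)  # -> remote
    main.operational = True
    spm = policy.pick_spm([remote, main], current_spm=spm)   # still remote until the cooldown elapses, then main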

Comment 1 Allon Mureinik 2017-07-12 08:04:52 UTC
Frankly, I don't think this is the right way to go. You can have all sorts of bad situations where a host comes up on one side of the cluster but the storage hasn't flipped over, etc.

IMHO, the way to go about this is to have something external (possibly even the admin himself/herself) manually try to force the SPM back to the "right" side once he/she is convinced it's operational again.
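
For what it's worth, the external approach is already scriptable: the engine API exposes a force-select-SPM action on hosts. A minimal sketch with the oVirt Python SDK (ovirtsdk4); the engine URL, credentials and host name are placeholders:

    import ovirtsdk4 as sdk

    # Connect to the engine (placeholder URL/credentials).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # illustration only; verify the CA in production
    )

    hosts_service = connection.system_service().hosts_service()
    # Look up the preferred host on the "right" side of the DR setup.
    host = hosts_service.list(search='name=main-site-host')[0]
    # Ask the engine to move the SPM role to that host.
    hosts_service.host_service(host.id).force_select_spm()

    connection.close()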

Comment 2 Yaniv Lavi 2017-11-26 12:41:28 UTC
I'd prefer to wait and see whether a customer requests something like this.
Closing for now.

