Bug 1114253 - PRD35 - [RFE] Allow to perform fence operations from a host in another DC
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.5.0
Assigned To: Eli Mesika
QA Contact: sefi litmanovich
Whiteboard: infra
Keywords: FutureFeature
Depends On: 1054778 1090803 1131411
Blocks: rhev3.5beta 1156165
 
Reported: 2014-06-29 05:00 EDT by Oved Ourfali
Modified: 2016-02-10 14:02 EST
CC: 14 users

See Also:
Fixed In Version: vt1.3
Doc Type: Enhancement
Doc Text:
Previously, a host performing a fencing operation had to be in the same data center as the host being fenced. Now, a host can be fenced by a host from a different data center.
Story Points: ---
Clone Of: 1054778
Environment:
Last Closed: 2015-02-11 13:05:02 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Infra
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 26513 None None None Never
Red Hat Product Errata RHSA-2015:0158 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.5.0 2015-02-11 17:38:50 EST

Description Oved Ourfali 2014-06-29 05:00:08 EDT
+++ This bug was initially created as a clone of Bug #1054778 +++

Description of problem:

When you shut down a host in a data center with no other host, you are unable to start it using the configured power management in oVirt.

Version-Release number of selected component (if applicable):

ovirt-engine 3.3.2 on EL 6

How reproducible:
Shut down a host in a data center with a single host (e.g. "init 0" in a shell).

Steps to Reproduce:
1. Shut down a host, e.g. in a local storage DC.
2. The host becomes "Non Responsive".
3. Try to start the host via the configured power management.

Actual results:

Error while executing action:

hostname:

    There is no other Host in the Data Center that can be used to test the Power Management settings.

Expected results:

The host starts; no test of power management should be necessary.

Additional info:

No other action circumvents this test: putting the host in maintenance and manually confirming that the host has been rebooted both still end with the failing power management test. (Why should power management be tested with another host in the first place?)

Related Bug: BZ1053434

--- Additional comment from Itamar Heim on 2014-01-17 12:56:07 EST ---

My first instinct was that this was similar to bug 837539, but it's not.

The engine doesn't perform fence operations itself; rather, it performs them from another host in the cluster/DC (by asking VDSM on that other host to call the fence script), hence it needs "another running host".

eli, maybe until we can do this from engine, we can allow doing this from a host not in same DC?
(wouldn't work for an engine with really only a single host, but for most use cases should be good enough?)

--- Additional comment from Eli Mesika on 2014-01-26 10:30:23 EST ---

(In reply to Itamar Heim from comment #1)

> eli, maybe until we can do this from engine, we can allow doing this from a
> host not in same DC?
> (wouldn't work for an engine with really only a single host, but for most
> use cases should be good enough?)

Yes, we now have the pm_proxy_preferences field, which is set by default to "cluster,DC". Maybe we can support this by adding an "other" value, so that for hosts that have this value set to "cluster,DC,other" we will search for a proxy in other DCs.
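The preference-tier search proposed above can be sketched roughly as follows. This is an illustrative model only, not the actual ovirt-engine code (which is Java); the names Host and find_fence_proxy are hypothetical, and "other_dc" stands in for whatever value the new option ends up using:

```python
# Hypothetical sketch: pick a fence proxy by walking preference tiers
# ("cluster", then "dc", then optionally another DC), returning the
# first running host that matches the current tier.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cluster: str
    dc: str
    up: bool

def find_fence_proxy(target, hosts, preferences=("cluster", "dc", "other_dc")):
    """Return the first running host (other than the target) matching a tier."""
    candidates = [h for h in hosts if h.up and h.name != target.name]
    for pref in preferences:
        for h in candidates:
            if pref == "cluster" and h.cluster == target.cluster:
                return h
            if pref == "dc" and h.dc == target.dc:
                return h
            if pref == "other_dc" and h.dc != target.dc:
                return h
    return None  # no eligible proxy anywhere
```

With the old default ("cluster", "dc"), a target that is the only host in its DC gets no proxy; adding the third tier lets a host from a different DC be selected, which is exactly the single-host-DC scenario from the bug description.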

--- Additional comment from Itamar Heim on 2014-02-13 13:31:06 EST ---

Pushing to target release 3.5, assuming it's not planned for 3.4 at this point...

--- Additional comment from Eli Mesika on 2014-04-07 16:19:18 EDT ---

For 3.5 we will address only the option to look for a proxy outside the DC where the host is located, trying other DCs.

This will be done by adding another option, named otherDC, to the pm_proxy_preferences field, which currently defaults to "cluster,DC".
(The pm_proxy_preferences value is available via the UI in the Host New/Edit Power Management tab, in the field named "Source"; in the API it is under <pm_proxies>.)

The default will stay "cluster,DC", and the admin can change this value per host using the API.
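A per-host update of this kind would be sent as an XML body to the host resource in the REST API. The following sketch builds such a payload; the <pm_proxies> element name comes from the comment above, but the surrounding structure (host/power_management/pm_proxy/type) and the "other_dc" value are assumptions that should be checked against the oVirt REST API reference for the installed version:

```python
# Sketch of a hypothetical REST payload for changing a host's
# fence-proxy preferences; element names other than <pm_proxies>
# are assumed, not confirmed by this bug report.

import xml.etree.ElementTree as ET

def pm_proxies_payload(preferences):
    host = ET.Element("host")
    pm = ET.SubElement(host, "power_management")
    proxies = ET.SubElement(pm, "pm_proxies")
    for pref in preferences:
        proxy = ET.SubElement(proxies, "pm_proxy")
        ET.SubElement(proxy, "type").text = pref
    return ET.tostring(host, encoding="unicode")

# e.g. extend the default "cluster,dc" order with the new option:
body = pm_proxies_payload(["cluster", "dc", "other_dc"])
```

An admin would PUT a body like this to the individual host resource, leaving the engine-wide default untouched, which matches the "change this value per host" behavior described above.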
Comment 1 Sven Kieske 2014-11-04 02:46:21 EST
This doc text does not make any sense and is confusing imho:

"We currently limit the host that does the fencing operation to be on the same Dc as the fenced host, although a host in another DC can also do that."

The sentence contradicts itself.
Comment 2 Oved Ourfali 2014-11-04 02:50:20 EST
(In reply to Sven Kieske from comment #1)
> This doc text does not make any sense and is confusing imho:
> 
> "We currently limit the host that does the fencing operation to be on the
> same Dc as the fenced host, although a host in another DC can also do that."
> 
> The sentence contradicts itself.

Where is the contradiction?
I wrote that the "reason" for the feature is that we limit the host to be in the same DC, while other DC hosts can do that.
And the result is that we now allow to use hosts from another DC as well.
Comment 3 Sven Kieske 2014-11-04 03:04:34 EST
(In reply to Oved Ourfali from comment #2)
> (In reply to Sven Kieske from comment #1)
> > This doc text does not make any sense and is confusing imho:
> > 
> > "We currently limit the host that does the fencing operation to be on the
> > same Dc as the fenced host, although a host in another DC can also do that."
> > 
> > The sentence contradicts itself.
> 
> Where is the contradiction?
> I wrote that the "reason" for the feature is that we limit the host to be in
> the same DC, while other DC hosts can do that.
> And the result is that we now allow to use hosts from another DC as well.

Your spelling is misleading imho, you do not limit this anymore so imho the wording should be:
"we limitED the host[..]"

but this are just my 2 cents, feel free to keep your wording.
I'm also no native english speaker, so I might be wrong.
Comment 4 Oved Ourfali 2014-11-04 03:06:53 EST
(In reply to Sven Kieske from comment #3)
> (In reply to Oved Ourfali from comment #2)
> > (In reply to Sven Kieske from comment #1)
> > > This doc text does not make any sense and is confusing imho:
> > > 
> > > "We currently limit the host that does the fencing operation to be on the
> > > same Dc as the fenced host, although a host in another DC can also do that."
> > > 
> > > The sentence contradicts itself.
> > 
> > Where is the contradiction?
> > I wrote that the "reason" for the feature is that we limit the host to be in
> > the same DC, while other DC hosts can do that.
> > And the result is that we now allow to use hosts from another DC as well.
> 
> Your spelling is misleading imho, you do not limit this anymore so imho the
> wording should be:
> "we limitED the host[..]"
> 
> but this are just my 2 cents, feel free to keep your wording.
> I'm also no native english speaker, so I might be wrong.

I see. Fixed.
Comment 6 errata-xmlrpc 2015-02-11 13:05:02 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html
