Bug 1301204 - Some stonith resource changes require "pcs resource"
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.2
Hardware: x86_64 Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: 7.3
Assigned To: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Reported: 2016-01-22 16:08 EST by Ken Gaillot
Modified: 2017-07-21 07:05 EDT (History)
CC List: 13 users

Clone Of: 1240330
Type: Bug

Description Ken Gaillot 2016-01-22 16:08:04 EST
+++ This bug was initially created as a clone of Bug #1240330 +++

--- Additional comment from David Vossel on 2015-07-06 12:15:04 EDT ---

Also, I'm surprised it's possible to disable a stonith device using pcs. There are no commands under 'pcs stonith' capable of disabling a fencing device. Someone would have to use 'pcs resource disable', which isn't stonith-specific. This may work, but I'm not sure it is supported.

--- Additional comment from Andrew Beekhof on 2015-07-15 19:06:51 EDT ---

Disable is supposed to mark it as unusable for fencing.

+++

As seen in the above exchange from another bug, there is some confusion among users and even developers as to when "pcs resource" should be used with stonith resources. (The original bug is otherwise irrelevant to this one.)

I recommend we either (1) make command aliases so that "pcs stonith" can be used for anything allowed with stonith resources (e.g. "pcs stonith disable"); or (2) update the pcs man page (and any other relevant documentation) to say what "pcs resource" commands are appropriate to use with stonith resources.
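
To illustrate the current asymmetry, a sketch (the device name, address, and credentials are hypothetical):

  # creating a fence device has a stonith-specific command...
  pcs stonith create fence-node1 fence_ipmilan ipaddr=10.0.0.1 login=admin passwd=secret pcmk_host_list=node1
  # ...but disabling it requires the generic resource command
  pcs resource disable fence-node1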
Comment 3 Tomas Jelinek 2016-10-19 08:13:57 EDT
From comment 20 of the original bz1240330 we can see that pacemaker supports disabling stonith resources as well as using constraints on them.
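
For example, pcs accepts a location constraint keeping a fence device off the node it fences (a sketch; names are hypothetical):

  pcs constraint location fence-node1 avoids node1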
Comment 4 Ivan Devat 2017-01-03 10:51:50 EST
Here is another case where a stonith resource acts like a regular resource.
With "pcs stonith create" it is currently possible to use the --master and --group flags. This is not documented, but it works the same way as in "pcs resource create".

Note that the --clone flag (unlike --master) has no effect in "pcs stonith create".
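
A sketch of the behavior described above (names and parameters are hypothetical):

  # accepted and effective, as with "pcs resource create ... --group"
  pcs stonith create fence-node1 fence_ipmilan pcmk_host_list=node1 --group fence-grp
  # accepted but has no effect
  pcs stonith create fence-node2 fence_ipmilan pcmk_host_list=node2 --clone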

There are two possibilities: document these flags (if they make sense) or remove this behavior (if they do not).

Does it make sense to create a stonith resource in a group or as a master?
Comment 5 Ken Gaillot 2017-01-03 16:34:17 EST
(In reply to Ivan Devat from comment #4)
> Here is another case where a stonith resource acts like a regular
> resource. With "pcs stonith create" it is currently possible to use the
> --master and --group flags. This is not documented, but it works the
> same way as in "pcs resource create".
> 
> Note that the --clone flag (unlike --master) has no effect in "pcs
> stonith create".
> 
> There are two possibilities: document these flags (if they make sense)
> or remove this behavior (if they do not).
> 
> Does it make sense to create a stonith resource in a group or as a
> master?

I don't think there's anything in Pacemaker preventing that, but it's probably a bad idea.

There has been some discussion of cloning stonith devices -- for example, see Bug 1250314 and this mailing list thread:

  http://oss.clusterlabs.org/pipermail/pacemaker/2014-July/022157.html

but the conclusion has been that cloning a stonith resource is probably an undesirable model. A master/slave clone would be even less desirable. I think we would be fine removing the flags.
Comment 6 Moullé Alain 2017-01-04 04:39:33 EST
Hi
Just a thought about a "group of stonith resources": it could be useful to configure a group of stonith resources when one node is covered by several of them: a first stonith resource with agent fence_ipmilan (using the diag action) and a second with agent fence_kdump, both at level 1, plus a fence_ipmilan (reboot action) at level 2. Since all three stonith resources target the same IP and equipment, if one is Failed the other two should be Failed at the same time, so it could be useful to group the three just to keep them together on the same node (ordering inside the group is not really interesting in this case, but it also does not matter).
It would be nice to group the three stonith resources for one node, for location constraints, and for the display order in pcs status.
See case 01539908, the message from John Ruemker on May 12 2016 at 03:58 PM +02:00, about this specific stonith configuration.
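
For concreteness, such a setup might look like this (a sketch; all names, addresses, and credentials are hypothetical, and pcmk_reboot_action=diag is just one way the diag action might be requested):

  # level 1: diagnostic interrupt plus kdump confirmation
  pcs stonith create fence-n1-diag fence_ipmilan ipaddr=10.0.0.1 login=admin passwd=secret pcmk_host_list=node1 pcmk_reboot_action=diag --group grp-fence-n1
  pcs stonith create fence-n1-kdump fence_kdump pcmk_host_list=node1 --group grp-fence-n1
  # level 2: plain power-cycle fallback
  pcs stonith create fence-n1-reboot fence_ipmilan ipaddr=10.0.0.1 login=admin passwd=secret pcmk_host_list=node1 --group grp-fence-n1
  # register the fencing levels for node1
  pcs stonith level add 1 node1 fence-n1-diag,fence-n1-kdump
  pcs stonith level add 2 node1 fence-n1-reboot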

Alain Moullé
Comment 7 Ken Gaillot 2017-01-04 18:46:08 EST
(In reply to Moullé Alain from comment #6)
> Hi
> Just a thought about a "group of stonith resources": it could be useful
> to configure a group of stonith resources when one node is covered by
> several of them: a first stonith resource with agent fence_ipmilan
> (using the diag action) and a second with agent fence_kdump, both at
> level 1, plus a fence_ipmilan (reboot action) at level 2. Since all
> three stonith resources target the same IP and equipment, if one is
> Failed the other two should be Failed at the same time, so it could be
> useful to group the three just to keep them together on the same node
> (ordering inside the group is not really interesting in this case, but
> it also does not matter).
> It would be nice to group the three stonith resources for one node, for
> location constraints, and for the display order in pcs status.
> See case 01539908, the message from John Ruemker on May 12 2016 at
> 03:58 PM +02:00, about this specific stonith configuration.
> 
> Alain Moullé

That is a good use case. We can keep --group and get rid of --clone/--master. I've never tried such a configuration, so we should test to make sure it behaves as expected.

On a related note, I suspect it's a bad idea to group a stonith resource with non-stonith resources. I'd have to investigate that to be sure.
Comment 8 Moullé Alain 2017-01-05 05:25:34 EST
Hi,
I fully agree that a group with stonith and any "standard" resource would be meaningless.
Thanks
Regards
Alain Moullé
Comment 9 Klaus Wenninger 2017-01-05 05:45:10 EST
I haven't tried it, but thinking of more and less preferable ways of fencing, one could think of cases where a group even with a non-fencing resource might make sense - ordered, then:
You might have one way of fencing that works in any case but has some drawbacks - whatever they might be.
And you might have a second way of fencing that needs certain preconditions to be available - but if they are, it would be preferable.
Those preconditions might be resources. Imagine an openvpn tunnel to the admin network the physical stonith device is in.
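
Purely illustrative (the resource names, the systemd unit, and whether pcs allows a stonith device in the same group as an ordinary resource are all assumptions here):

  # precondition: VPN tunnel into the admin network
  pcs resource create admin-vpn systemd:openvpn@admin --group grp-pref-fence
  # preferred fence device, reachable only through the tunnel
  pcs stonith create fence-n1-pref fence_ipmilan ipaddr=192.168.100.1 login=admin passwd=secret pcmk_host_list=node1 --group grp-pref-fence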
Comment 10 Ken Gaillot 2017-01-05 09:37:56 EST
(In reply to Klaus Wenninger from comment #9)
> I haven't tried it, but thinking of more and less preferable ways of
> fencing, one could think of cases where a group even with a non-fencing
> resource might make sense - ordered, then:
> You might have one way of fencing that works in any case but has some
> drawbacks - whatever they might be.
> And you might have a second way of fencing that needs certain
> preconditions to be available - but if they are, it would be preferable.
> Those preconditions might be resources. Imagine an openvpn tunnel to the
> admin network the physical stonith device is in.

While that's a theoretical possibility, I'm pretty sure pacemaker has built-in logic to always start stonith devices first (which makes sense), and I don't know how putting one in a group after a non-stonith resource would interact with that.

I don't think it's a strong use case, because if something goes wrong with the depended-on non-stonith resource, there may be no way to recover.
Comment 11 Klaus Wenninger 2017-01-05 09:51:24 EST
(In reply to Ken Gaillot from comment #10)
> (In reply to Klaus Wenninger from comment #9)
> > I haven't tried it, but thinking of more and less preferable ways of
> > fencing, one could think of cases where a group even with a
> > non-fencing resource might make sense - ordered, then:
> > You might have one way of fencing that works in any case but has some
> > drawbacks - whatever they might be.
> > And you might have a second way of fencing that needs certain
> > preconditions to be available - but if they are, it would be
> > preferable.
> > Those preconditions might be resources. Imagine an openvpn tunnel to
> > the admin network the physical stonith device is in.
> 
> While that's a theoretical possibility, I'm pretty sure pacemaker has
> built-in logic to always start stonith devices first (which makes sense),
> and I don't know how putting one in a group after a non-stonith resource
> would interact with that.

Maybe that logic exists by now - at least startup usually looks as if it does. That's why I said I haven't tried it. Maybe the logic is not that strict. I have at least a vague memory, from far in the past, of cases where a plain resource came up before the stonith device had finished starting.

> 
> I don't think it's a strong use case, because if something goes wrong with
> the depended-on non-stonith resource, there may be no way to recover.

The idea was that the fallback stonith device would be used to recover if the node becomes unclean because of that resource. The fallback here might be watchdog fencing as well.
