Bug 1242422 - [RFE] [HA] Overcloud: HA: Integrate automatic fencing configuration with director for automatic deployment and upgrades
Summary: [RFE] [HA] Overcloud: HA: Integrate automatic fencing configuration with director for automatic deployment and upgrades
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-common
Version: Director
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 11.0 (Ocata)
Assignee: Chris Jones
QA Contact: Asaf Hirshberg
URL:
Whiteboard:
Duplicates: 1194301 1340941
Depends On: 1429892
Blocks: 1194301 1247019 1264181 1336839 1340941 1361252 1426066 1426481 1427515 1558241
 
Reported: 2015-07-13 10:09 UTC by Leonid Natapov
Modified: 2020-04-15 14:14 UTC
CC: 28 users

Fixed In Version: openstack-tripleo-common-6.0.0-5.el7ost python-tripleoclient-6.1.0-3.el7ost
Doc Type: Enhancement
Doc Text:
Automatic fencing setup can be used in director for easier High Availability deployments and upgrades. To benefit from the new feature, use the 'overcloud generate fencing' command.
Clone Of:
Clones: 1558241
Environment:
Last Closed: 2017-05-17 19:23:00 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Launchpad 444419 0 None None None 2017-03-10 17:16:36 UTC
Launchpad 1649695 0 None None None 2017-03-02 09:27:58 UTC
Launchpad 1664568 0 None None None 2017-03-02 09:28:34 UTC
Launchpad 1670687 0 None None None 2017-03-07 14:04:55 UTC
OpenStack gerrit 436965 0 None MERGED Fix fencing action parameter name. 2020-10-08 13:04:13 UTC
OpenStack gerrit 445570 0 None MERGED Handle unprovisioned Ironic nodes in fencing parameter generator. 2020-10-08 13:04:13 UTC
OpenStack gerrit 446184 0 None MERGED Make fencing action parameter optional. 2020-10-08 13:04:13 UTC
OpenStack gerrit 450251 0 None MERGED Make fencing action parameter optional. 2020-10-08 13:04:13 UTC
Red Hat Product Errata RHEA-2017:1245 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 11.0 Bug Fix and Enhancement Advisory 2017-05-17 23:01:50 UTC

Description Leonid Natapov 2015-07-13 10:09:50 UTC
The overcloud deployment is missing automatic fencing configuration.
The user has to configure fencing manually after the deployment is finished.

Comment 3 Michele Baldessari 2015-12-10 14:48:12 UTC
So this is doable today with Liberty and OSP-d. What you need to do is create
a YAML file like this:
parameters:
    EnableFencing: true
    FencingConfig:
        {
            "devices": [
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:e3:82", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.102", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:d6:da", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.100", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:dc:da", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.94", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:eb:ea", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.107", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:ee:2a", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.105", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:f1:b2", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.104", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }, 
                {
                    "agent": "fence_ipmilan", 
                    "host_mac": "b8:ca:3a:66:ef:5a", 
                    "params": {
                        "delay": "20", 
                        "ipaddr": "10.1.8.106", 
                        "lanplus": "true", 
                        "login": "qe-scale", 
                        "passwd": "******"
                    }
                }
            ]
        }


Then you just need to pass -e fencing.yaml to the deploy command, and after the
deployment you will get one stonith device per host.
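
A minimal sketch of that invocation and a quick post-deploy check (the
--templates flag and the use of sudo are assumptions about the setup; keep
whatever other -e environment files you already pass):

openstack overcloud deploy --templates -e fencing.yaml

# After the deploy, on any controller, one stonith device per host should show up:
sudo pcs status
sudo pcs stonith show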

In case of fence_xvm you can use something like the following:
parameters:
    EnableFencing: true
    FencingConfig:
        {
            "devices": [
                {
                    "agent": "fence_xvm", 
                    "host_mac": "52:54:00:2d:bb:38", 
                    "params": {
                        "multicast_address": "225.0.0.12", 
                        "port": "osp8-node1"
                    }
                }, 
                {
                    "agent": "fence_xvm", 
                    "host_mac": "52:54:00:e9:f4:a8", 
                    "params": {
                        "multicast_address": "225.0.0.12", 
                        "port": "osp8-node2"
                    }
                }, 
                {
                    "agent": "fence_xvm", 
                    "host_mac": "52:54:00:9f:9f:3f", 
                    "params": {
                        "multicast_address": "225.0.0.12", 
                        "port": "osp8-node3"
                    }
                }, 
                {
                    "agent": "fence_xvm", 
                    "host_mac": "52:54:00:55:02:bb", 
                    "params": {
                        "multicast_address": "225.0.0.12", 
                        "port": "osp8-node4"
                    }
                }, 
                {
                    "agent": "fence_xvm", 
                    "host_mac": "52:54:00:8e:f9:36", 
                    "params": {
                        "multicast_address": "225.0.0.12", 
                        "port": "osp8-node5"
                    }
                }
            ]
        }


Of course, in this case the MAC-address-to-VM mapping is very specific to my setup.
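
(Purely as an illustration of how that mapping could be collected on the
virtualization host; the domain names below are taken from the example above
and are assumptions about the setup:)

for vm in osp8-node1 osp8-node2 osp8-node3 osp8-node4 osp8-node5; do
    echo "== $vm"
    virsh domiflist "$vm"    # the MAC column is what goes into host_mac
done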

I have started some work to output the YAML above automatically via a tripleo
command ("openstack baremetal fencing export foo.yaml"); the work in progress is here:
https://github.com/mbaldessari/python-tripleoclient/tree/wip-fencing-support
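
(For anyone landing here later: per the Doc Text above, the version of this
that eventually shipped is the "overcloud generate fencing" command. A rough
sketch of its use follows; the --output flag and the instackenv.json argument
are assumptions, so check "openstack overcloud generate fencing --help" for the
exact interface:)

openstack overcloud generate fencing --output fencing.yaml instackenv.json
# then pass the generated fencing.yaml to the deploy with -e, as described above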

Comment 6 Hugh Brock 2016-04-04 11:30:18 UTC
No, automated fence agent configuration will not make RHEL OSP 8.

Comment 7 Mike Burns 2016-04-07 20:43:53 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 14 Asaf Hirshberg 2016-12-07 05:26:53 UTC
What about constraints? Will the YAML file configure them so that a fence resource is not managed on the same controller it applies to? For example:
 stonith-overcloud-controller-0	(stonith:fence_ipmilan):	Started overcloud-controller-0
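
(Not an answer on behalf of the generator, but for reference, the kind of
location constraint being asked about can be expressed with pcs like this,
using the resource and node names from the example above:)

pcs constraint location stonith-overcloud-controller-0 avoids overcloud-controller-0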

Comment 16 Fabio Massimo Di Nitto 2017-01-11 04:47:58 UTC
*** Bug 1194301 has been marked as a duplicate of this bug. ***

Comment 17 Fabio Massimo Di Nitto 2017-01-11 04:49:29 UTC
*** Bug 1340941 has been marked as a duplicate of this bug. ***

Comment 20 Mike Burns 2017-02-24 18:12:47 UTC
Can you please link the upstream patches in external trackers and update the component appropriately?

Thanks

Comment 23 Chris Jones 2017-04-25 08:55:41 UTC
@Lukas: I'd probably replace "for automatic High Availability" with "for easier High Availability" - I think the current wording suggests that this configuration generator somehow does magic for automating all of HA.

Comment 25 Asaf Hirshberg 2017-04-26 11:28:08 UTC
VERIFIED.

Comment 27 errata-xmlrpc 2017-05-17 19:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1245

