Bug 1634048 - Unable to enable SSL on overcloud after initial deployment
Summary: Unable to enable SSL on overcloud after initial deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-pacemaker
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 14.0 (Rocky)
Assignee: Michele Baldessari
QA Contact: pkomarov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-28 14:43 UTC by Marius Cornea
Modified: 2019-05-08 08:39 UTC
CC: 20 users

Fixed In Version: puppet-pacemaker-0.7.2-0.20181008172519.9a4bc2d.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-11 11:53:20 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1797542 0 None None None 2018-10-12 10:28:48 UTC
OpenStack gerrit 609998 0 'None' MERGED Improve deep_compare detection 2020-12-15 14:19:21 UTC
Red Hat Product Errata RHEA-2019:0045 0 None None None 2019-01-11 11:53:28 UTC

Description Marius Cornea 2018-09-28 14:43:41 UTC
Description of problem:

I am unable to enable SSL on the overcloud after the initial deployment. When re-running the overcloud deploy command with the environment files used to enable SSL on the public endpoints, the overcloud stack update gets stuck and times out.

Running pcs status on a controller shows:

[root@controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: controller-0 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Sep 28 14:40:30 2018
Last change: Fri Sep 28 02:04:49 2018 by root via crm_resource on controller-0

12 nodes configured
37 resources configured

Online: [ controller-0 controller-1 controller-2 ]
GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]

Full list of resources:

 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started controller-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started controller-1
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started controller-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master controller-0
   galera-bundle-1	(ocf::heartbeat:galera):	Master controller-1
   galera-bundle-2	(ocf::heartbeat:galera):	Master controller-2
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Master controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Slave controller-1
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2
 ip-192.168.24.17	(ocf::heartbeat:IPaddr2):	Stopped
 ip-10.0.0.106	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.1.11	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.1.12	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.3.14	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.4.10	(ocf::heartbeat:IPaddr2):	Stopped
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Stopped
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Stopped
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Stopped
 Docker container: openstack-cinder-volume [192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-docker-0	(ocf::heartbeat:docker):	Started controller-0

Failed Actions:
* haproxy-bundle-docker-0_start_0 on controller-1 'unknown error' (1): call=137, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:04:59 2018', queued=0ms, exec=1654ms
* haproxy-bundle-docker-1_start_0 on controller-1 'unknown error' (1): call=133, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:04:53 2018', queued=0ms, exec=1737ms
* haproxy-bundle-docker-2_start_0 on controller-1 'unknown error' (1): call=149, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:05:12 2018', queued=0ms, exec=1650ms
* haproxy-bundle-docker-0_start_0 on controller-0 'unknown error' (1): call=135, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:04:53 2018', queued=0ms, exec=1716ms
* haproxy-bundle-docker-1_start_0 on controller-0 'unknown error' (1): call=139, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:04:59 2018', queued=0ms, exec=1714ms
* haproxy-bundle-docker-2_start_0 on controller-0 'unknown error' (1): call=141, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:05:06 2018', queued=0ms, exec=1617ms
* haproxy-bundle-docker-0_start_0 on controller-2 'unknown error' (1): call=145, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:05:06 2018', queued=0ms, exec=1579ms
* haproxy-bundle-docker-1_start_0 on controller-2 'unknown error' (1): call=149, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:05:12 2018', queued=0ms, exec=1565ms
* haproxy-bundle-docker-2_start_0 on controller-2 'unknown error' (1): call=133, status=complete, exitreason='Newly created docker container exited after start',
    last-rc-change='Fri Sep 28 02:04:53 2018', queued=0ms, exec=1568ms


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled


Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-9.0.0-0.20180919080941.0rc1.0rc1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy overcloud:

openstack overcloud deploy \
--timeout 100 \
--templates /usr/share/openstack-tripleo-heat-templates \
--stack overcloud \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-e /home/stack/virt/internal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e /home/stack/virt/docker-images.yaml

2. Enable SSL:

openstack overcloud deploy \
--timeout 100 \
--templates /usr/share/openstack-tripleo-heat-templates \
--stack overcloud \
--libvirt-type kvm \
--ntp-server clock.redhat.com \
-e /home/stack/virt/internal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e /home/stack/virt/docker-images.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
-e ~/enable-tls.yaml \
-e ~/inject-trust-anchor.yaml \
-e ~/public_vip.yaml


Actual results:
The overcloud stack update gets stuck and eventually times out.

Expected results:
Overcloud public endpoints get SSL enabled.

Additional info:
Attaching the SSL environment files:

(undercloud) [stack@undercloud-0 ~]$ cat enable-tls.yaml 
# ********************************************************************************
# DEPRECATED: Use tripleo-heat-templates/environments/ssl/enable-tls.yaml instead.
# ********************************************************************************
# Use this environment to pass in certificates for SSL deployments.
# For these values to take effect, one of the tls-endpoints-*.yaml environments
# must also be used.
parameter_defaults:
  HorizonSecureCookies: True
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDNjCCAh4CAQEwDQYJKoZIhvcNAQELBQAwYjELMAkGA1UEBhMCVVMxCzAJBgNV
    BAgMAk5DMRAwDgYDVQQHDAdSYWxlaWdoMRAwDgYDVQQKDAdSZWQgSEF0MQswCQYD
    VQQLDAJRRTEVMBMGA1UEAwwMMTkyLjE2OC4yNC4yMB4XDTE4MDkyODAxMTc0MFoX
    DTE5MDkyODAxMTc0MFowYDELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAk5DMRAwDgYD
    VQQHDAdSYWxlaWdoMRAwDgYDVQQKDAdSZWQgSEF0MQswCQYDVQQLDAJRRTETMBEG
    A1UEAwwKMTAuMC4wLjEwNjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
    ANIohEps7iGzjPpifrlWeVw6OCfz4EjqoOESPxfGbBsGfbWxL09Sr7HKaVzX2Kdu
    DBynoOC9CgGJgfzfLxBYTKgucVjWm9RwWy8TGMzD0QXwG+CK1vyGRZaoP30N8uE7
    lrp+0CCdg6H+4jpfNw8tlIK+4JVDRwkw2Rt6gzyZetuIy0Abifb/aezp+FPhPq/S
    FTf9UocDHjH6OXuYEaR9NqJHErtYM0bS2zJZPAlg76fetCzXwzEW55ZiETavcvCw
    UIXJQX4FY/W5F7sFzhUliuQ/RADj2lYNzvuDnj+iUGuwND98Xro6fvs6E3f8XIiG
    9YNa4O/aLsyUStXo1qPNf+ECAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAlTS+g6KQ
    fvWvwKz3XlyzoBJfCStvzYTO4tQyugCsLVWQpRjV5sI3sar0E/NOT/QsYkHh+gl8
    vjMjas0J/aLROPp30bxuB48C+Kxr/6kR92HKJz+dDtZros1iuZO7xdPoM4elg3Ao
    4UY12lV1wglYCVpxqnIJZUGpXmKgxraKibIH6ckws80mLcIWgWzLAVfTFmSN4tbK
    PDu0T1K5seAFnBusBSA7s7WXClIWpnNtIuSVczxJBQkdqEoCZxWyedpmXJ3YA22K
    cePJJXyVDszOXS/yYbCC5DRAe2jyiKkC5nbq01JUvp5/ToOOaPyT4gQvfFgGUfNZ
    vZal+2HFyx1vnw==
    -----END CERTIFICATE-----
  SSLIntermediateCertificate: ''
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEA0iiESmzuIbOM+mJ+uVZ5XDo4J/PgSOqg4RI/F8ZsGwZ9tbEv
    T1KvscppXNfYp24MHKeg4L0KAYmB/N8vEFhMqC5xWNab1HBbLxMYzMPRBfAb4IrW
    /IZFlqg/fQ3y4TuWun7QIJ2Dof7iOl83Dy2Ugr7glUNHCTDZG3qDPJl624jLQBuJ
    9v9p7On4U+E+r9IVN/1ShwMeMfo5e5gRpH02okcSu1gzRtLbMlk8CWDvp960LNfD
    MRbnlmIRNq9y8LBQhclBfgVj9bkXuwXOFSWK5D9EAOPaVg3O+4OeP6JQa7A0P3xe
    ujp++zoTd/xciIb1g1rg79ouzJRK1ejWo81/4QIDAQABAoIBAAk49fU+KoUYGAu0
    3tLLVLATbfty3FjW0xCNeG9Wqc/VzRZ4HBdjDYrD0zPb1Qoj7iwjvw+dvB6tJiMu
    uCYDefm3cAnyAQylkZrTP7dcsIOOMer8rMqQKeWepIqcXhg8QTUV85Q61Vf6k3r0
    SrpDycyjC8ABH9Drb9ug2LmEErwnhDP7TzXf7xnkPXJc64w7haa52D9tew9Ugtw3
    mF1TFSoqNVnuF6Dxj2XwX6syOpYhW3bm5B9w+D84t5Ib5HOw80BkrW09Ut+PD7xs
    bM7ox05X3u3XCj8giWjXE8xVtolGyAmhY3JcXVCq71Sl90wTD5pjAUbnehg6mFBL
    IH8LZ0ECgYEA/VWnnHK+oJft5UYrGD5uPGhwwVkdpqJ68OAY4D/WK88MMBDfmFrR
    qaAQ3VlJJMKbtdo/IG5yzsP7x6rEXpe98Mp7JozU328KhRL96Kt8VIKTNb8b7UZS
    jwYa3v4zok5QGpjXRidC5odkKy9ZWSO6QF6DL3M8vjatBi4BEia6Zv0CgYEA1F6R
    jUcWr4irqn/K/K62o48mHbSve2eoY/Vb5GeM1D0IXo8a00N8tmw0pN415exjQIZe
    r7n/oSwJ0l/Q3CDsDyUO7CEC7ugRQvJuVDQcu0QxdhJAQ2OC8jeXVTR64uZtKDZc
    uNGmFsIk81u/gIqv5XAXz+/oejmgAoeN+P2eG7UCgYEAiZDtx6L3qTVXVd1YoX2l
    VuBP+m5uD4LDx/GpEH0ZzAhO0fsXsCABIl9tSQWnFnMnIwU0qmPPrD/4hWBR6hv0
    ZMFPRovLPNmLmN/LSYF1rl6JmfIBNEOvqULjbJwVZdmo3giJPessBQsYdersVd90
    0GeCTU4CejgulOB+bGDjb/ECgYEArTLnUCpfFQ/IuYf5u7Sd3azcsDNxPpremY88
    v324PEv+bGgXv571si1zjdZwbLEqjTBJPtXZ5s9svzdHto/fFcbqcafGpaN8mHXx
    hxjtKclG8X2XDZ615+dws5vWcQDum3IiktTguQTjb9oux53UMLIHqZ2Go1Al8Iov
    PpdRAFUCgYEAsnaKFH5IKR8qUav4tzpLuXN6n5KznJUQWj5zTqNTf0DmNypOZg1h
    j4fmPrL23bmXOlQtACP6ONsanvo5QyEX+A+lca38U/744OiSttVCRC84Kzsq3cMN
    44HKWPrVh/cfrziryfRdmnhFGuaEoUtLk8wnDoCjW6TYuzI6HyXUi24=
    -----END RSA PRIVATE KEY-----

  # Disable Gnocchi Incoming redis storage driver when using tls
  GnocchiIncomingStorageDriver: ''

resource_registry:
  OS::TripleO::NodeTLSData: OS::Heat::None
(undercloud) [stack@undercloud-0 ~]$ cat inject-trust-anchor.yaml 
# ********************************************************************************
# DEPRECATED: Use tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml
# instead.
# ********************************************************************************
parameter_defaults:
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDlzCCAn+gAwIBAgIJAPItZREi3aHLMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
    BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
    UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x
    ODA5MjcyMjIyNTZaFw0xOTA5MjcyMjIyNTZaMGIxCzAJBgNVBAYTAlVTMQswCQYD
    VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG
    A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB
    BQADggEPADCCAQoCggEBAJczX4K6PCqMAu3OBqQTWtqQop9/ll7MXYSqLHjrxHph
    oz5JxGgvMuuUM1y76oQn3mvHmg4nzyS6ZbAjbDymTwDOv5goswIRV8jAGk4eoOCI
    Nk7L2G6o9tyeJRU5BiB9/ZZe3Uis0GBFISatWrWJn6P8MrqsTvIwUK4Sf8Q8XSAv
    wkf+nXwDSGXJxp1/4MS/ZNxp5VNDYZCmNjflklNpCHxj1R+V/ya6dSnuRweFpRKN
    ALbCNQuRX/jUkWDPPzmnm2qeIVDfDn0G0xE0+1JwEMSBRmd2zGY4fpsVVzDD6e5+
    YlIzALoutOcO2BV7QBUeirKqtTLbAl7McnkY7VVGFB0CAwEAAaNQME4wHQYDVR0O
    BBYEFLdAdzJIjg4zu+p2Y7FrEybIie+zMB8GA1UdIwQYMBaAFLdAdzJIjg4zu+p2
    Y7FrEybIie+zMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAEvj95y3
    CVCAg1u90wKNZr4SKmFPq8BOmzexcJpSjJ9VhKQOQty/aNIcidalhbTHrGcY8Km8
    z1NWdpa5iBrCAMBtthZNXrY8TNL6eqVTH8tn+lZN0hxtbfY+LCPR7q0ZdzGp5gh2
    jhGbb8EV9LnE2pps34MsORlq+inpu9TcE8M7Jq83qdz3Zfr8/ppMBuiL7VhFquee
    ebZAGGi4/83gnbv3sbxEN57ZpQY3UC39adWJDXo1J8DKLh61IWRUCCZQpTHSFxik
    MWskTdhftxhJa9iVWvWt0I/cjO82oiGc0oIZUDg9ClqJXn4180dp476I9Nx+/Hm8
    V3fgky8OVieVtxY=
    -----END CERTIFICATE-----

resource_registry:
  OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml
(undercloud) [stack@undercloud-0 ~]$ cat public_vip.yaml 
parameter_defaults:
    PublicVirtualFixedIPs: [{'ip_address':'10.0.0.106'}]

Comment 3 James Slagle 2018-09-28 18:48:50 UTC
Also adding DFG:Security, as I'm not entirely sure this is a supported operation.

Comment 8 Cédric Jeanneret 2018-10-11 14:30:00 UTC
Creating a reproducer - currently deploying the "non-TLS" overcloud with just one node.

Comment 9 Cédric Jeanneret 2018-10-12 05:26:48 UTC
Hello Marius,

While reviewing your issue once more to get the right file content for the custom parts, I noticed the headers: these environment files are deprecated and have been replaced by other files.

Did you take that into account? Did you try to run the deploy/update with the right file version? There are some tiny differences that might play a role in your issue...

Are you able to run a test on your env?

Cheers,

C.

Comment 10 Cédric Jeanneret 2018-10-12 07:36:04 UTC
So, we have a working reproducer - well, it's failing ;).

After more research, we found that:
- the addition of the TLS thingy is mostly working, at least in puppet-tripleo and tripleo-heat-templates (we get the hiera entry for it)
- pacemaker manages the haproxy bundle (due to the VIP)
- apparently puppet-pacemaker doesn't update the CIB with the new volumes, preventing HAProxy from accessing its certificate bundle (private key + certificate + chain)

So the issue is either:
- puppet-pacemaker doesn't correctly detect the CIB change in [1]
- puppet-pacemaker doesn't correctly apply the CIB change in [2]

Still digging to find out what the real issue behind it is.


[1] https://github.com/openstack/puppet-pacemaker/blob/master/lib/puppet/provider/pcmk_bundle/default.rb#L142-L144

[2] https://github.com/openstack/puppet-pacemaker/blob/286197f2bd72154b060e87d861f956316f666af8/lib/puppet/provider/pcmk_common.rb#L378
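The suspected failure mode, a change detection that misses newly added bundle storage volumes, can be illustrated with a rough sketch. This is hypothetical Python for illustration only, not the actual puppet-pacemaker Ruby provider; the element names mimic Pacemaker's bundle CIB syntax, and the certificate paths are made up:

```python
# Hypothetical illustration of the suspected bug: a shallow comparison that
# looks only at the bundle element itself reports "no change" when a new
# <storage-mapping> child is added, so no CIB update gets pushed.
import xml.etree.ElementTree as ET

OLD_CIB = """
<bundle id="haproxy-bundle">
  <docker image="openstack-haproxy:pcmklatest" replicas="3"/>
  <storage>
    <storage-mapping id="haproxy-cfg" options="ro"
        source-dir="/var/lib/kolla/config_files/haproxy.json"
        target-dir="/var/lib/kolla/config_files/config.json"/>
  </storage>
</bundle>
"""

# Same bundle after the TLS update adds a cert volume (paths are invented).
NEW_CIB = """
<bundle id="haproxy-bundle">
  <docker image="openstack-haproxy:pcmklatest" replicas="3"/>
  <storage>
    <storage-mapping id="haproxy-cfg" options="ro"
        source-dir="/var/lib/kolla/config_files/haproxy.json"
        target-dir="/var/lib/kolla/config_files/config.json"/>
    <storage-mapping id="haproxy-cert" options="ro"
        source-dir="/etc/pki/tls/private/overcloud_endpoint.pem"
        target-dir="/var/lib/kolla/config_files/src-tls"/>
  </storage>
</bundle>
"""

def shallow_changed(old, new):
    """Compare only the bundle element's own tag and attributes."""
    a, b = ET.fromstring(old), ET.fromstring(new)
    return (a.tag, a.attrib) != (b.tag, b.attrib)

def deep_changed(old, new):
    """Recursively compare the entire subtree, children included."""
    def canon(e):
        return (e.tag, tuple(sorted(e.attrib.items())),
                tuple(canon(c) for c in e))
    a, b = ET.fromstring(old), ET.fromstring(new)
    return canon(a) != canon(b)

print(shallow_changed(OLD_CIB, NEW_CIB))  # False: the new volume goes unnoticed
print(deep_changed(OLD_CIB, NEW_CIB))     # True: the deep compare detects it
```

This matches the direction of the linked review ("Improve deep_compare detection"): the provider has to compare the whole bundle subtree, not just the top-level resource element.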

Comment 11 Juan Antonio Osorio 2018-10-12 09:20:38 UTC
There seems to be some issue in the way puppet-pacemaker compares the new and the old CIB.

I find the following in the logs:

Oct 12 08:32:55 overcloud-controller-0 dockerd-current[13031]: Debug: pcmk_resource_has_changed (ng version) returned false for resource haproxy-bundle
Oct 12 08:32:55 overcloud-controller-0 dockerd-current[13031]: Debug: Exists: bundle haproxy-bundle exists 0 location exists 0 deep_compare: true

Comment 12 Michele Baldessari 2018-10-26 15:09:59 UTC
Marius, I have now proposed a more definitive version of the patch to fix this in the linked review. If you could give it a test (by injecting the patch on the overcloud-full image or via upload-puppet-modules) and report back, that would be most appreciated ;)

Comment 13 Marius Cornea 2018-10-27 00:47:04 UTC
(In reply to Michele Baldessari from comment #12)
> Marius, I proposed now a more definitive version of the patch to fix this in
> the linked review. If you could give it test (by injecting the patch on the
> overcloud-full image or via upload-puppet-modules) and report back that
> would be most appreciated ;)

Thanks Michele! I tested the change locally and it worked fine for me.

Comment 33 pkomarov 2018-11-22 17:48:01 UTC
Verified.

Reproduced using "Steps to Reproduce:"

#check OC update succeeded: 
(undercloud) [stack@undercloud-0 ~]$ tail deploy2.out

Thursday 22 November 2018  11:42:36 -0500 (0:00:00.585)       0:37:00.079 ***** 
=============================================================================== 

Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: https://10.0.0.106:13000
Overcloud Horizon Dashboard URL: https://10.0.0.106:443/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
(undercloud) [stack@undercloud-0 ~]$ openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
| 8d26abe8-0439-47c4-a6c8-d8de843a1e83 | overcloud  | cfa7494e659a4fcdb8636f0bf3a4b422 | UPDATE_COMPLETE | 2018-11-22T14:40:59Z | 2018-11-22T15:42:46Z |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+


#check tls certificates are in place for containers: 

[root@controller-0 ~]# docker inspect rabbitmq-bundle-docker-1|grep -C 3 tls
            "Binds": [
                "/etc/localtime:/etc/localtime:ro",
                "/var/lib/rabbitmq:/var/lib/rabbitmq:rw",
                "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro",
                "/var/log/containers/rabbitmq:/var/log/rabbitmq:rw",
                "/dev/log:/dev/log:rw",
                "/etc/pacemaker/authkey:/etc/pacemaker/authkey",
                "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro",
                "/etc/hosts:/etc/hosts:ro",
                "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
                "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro",
                "/var/log/pacemaker/bundles/rabbitmq-bundle-1:/var/log",
                "/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro",
                "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro"
--
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/etc/pki/tls/certs/ca-bundle.trust.crt",
                "Destination": "/etc/pki/tls/certs/ca-bundle.trust.crt",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
--
            },
            {
                "Type": "bind",
                "Source": "/etc/pki/tls/certs/ca-bundle.crt",
                "Destination": "/etc/pki/tls/certs/ca-bundle.crt",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/pki/tls/cert.pem",
                "Destination": "/etc/pki/tls/cert.pem",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"

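The docker inspect check above can also be done programmatically. A minimal sketch, assuming you feed it the JSON from docker inspect; the helper name is made up, and the sample data below is abbreviated from the output above rather than taken from a live container:

```python
# Minimal sketch: confirm a container's bind mounts include the expected
# TLS certificate paths, given the JSON produced by `docker inspect <name>`.
# Sample abbreviated from the rabbitmq-bundle-docker-1 output above.
import json

inspect_output = json.loads("""
[{"HostConfig": {"Binds": [
    "/etc/localtime:/etc/localtime:ro",
    "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro",
    "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro",
    "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro"
]}}]
""")

def tls_binds(inspect_json):
    """Return the bind mounts whose host path lives under /etc/pki/tls."""
    binds = inspect_json[0]["HostConfig"]["Binds"]
    return [b for b in binds if b.startswith("/etc/pki/tls")]

for bind in tls_binds(inspect_output):
    print(bind)
```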
#check all pacemaker resources are up and running : 
(undercloud) [stack@undercloud-0 ~]$ ansible controller-0 -b -mshell -a'pcs status'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | SUCCESS | rc=0 >>
Cluster name: tripleo_cluster
Stack: corosync
Current DC: controller-1 (version 1.1.19-8.el7_6.1-c3c624ea3d) - partition with quorum
Last updated: Thu Nov 22 17:37:35 2018
Last change: Thu Nov 22 17:36:52 2018 by hacluster via crmd on controller-0

14 nodes configured
61 resources configured

Online: [ controller-0 controller-1 controller-2 ]
RemoteOnline: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
GuestOnline: [ galera-bundle-0@controller-2 galera-bundle-1@controller-0 galera-bundle-2@controller-1 rabbitmq-bundle-0@controller-2 rabbitmq-bundle-1@controller-0 rabbitmq-bundle-2@controller-1 redis-bundle-0@controller-2 redis-bundle-1@controller-0 redis-bundle-2@controller-1 ]

Full list of resources:

 overcloud-novacomputeiha-0	(ocf::pacemaker:remote):	Started controller-0
 overcloud-novacomputeiha-1	(ocf::pacemaker:remote):	Started controller-1
 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started controller-2
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started controller-0
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started controller-1
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master controller-2
   galera-bundle-1	(ocf::heartbeat:galera):	Master controller-0
   galera-bundle-2	(ocf::heartbeat:galera):	Master controller-1
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Master controller-2
   redis-bundle-1	(ocf::heartbeat:redis):	Slave controller-0
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-1
 ip-192.168.24.8	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-10.0.0.120	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-172.17.1.28	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.1.16	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.3.27	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-172.17.4.10	(ocf::heartbeat:IPaddr2):	Started controller-1
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started controller-2
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Started controller-0
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-1
 stonith-fence_compute-fence-nova	(stonith:fence_compute):	Stopped
 Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]
     Stopped: [ controller-0 controller-1 controller-2 overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
 nova-evacuate	(ocf::openstack:NovaEvacuate):	Started controller-2
 stonith-fence_ipmilan-52540050bedd	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-52540030c1e6	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400cec7b7	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400fb0edd	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-5254000791fc	(stonith:fence_ipmilan):	Stopped
 Docker container: openstack-cinder-volume [192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-docker-0	(ocf::heartbeat:docker):	Started controller-2
 ip-10.0.0.106	(ocf::heartbeat:IPaddr2):	Started controller-0

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Comment 36 errata-xmlrpc 2019-01-11 11:53:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0045

