Bug 1168755

Summary: rubygem-staypuft: need to move to using the rabbitmq resource agent (helps HA setup to recover from disaster)
Product: Red Hat OpenStack
Reporter: Alexander Chuzhoy <sasha>
Component: openstack-foreman-installer
Assignee: Crag Wolfe <cwolfe>
Status: CLOSED ERRATA
QA Contact: Alexander Chuzhoy <sasha>
Severity: high
Docs Contact:
Priority: high
Version: unspecified
CC: abeekhof, aberezin, cwolfe, dmacpher, dvossel, jeckersb, jguiditt, mburns, morazi, ohochman, rhos-maint, sasha, yeylon
Target Milestone: z1
Keywords: ZStream
Target Release: Installer
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: openstack-foreman-installer-3.0.14-1.el7ost
Doc Type: Enhancement
Doc Text:
Configuring RabbitMQ under Pacemaker and coordinating the service bootstrap is difficult when the broker is managed through systemd. This enhancement adds a RabbitMQ resource agent, and deployments now use that agent rather than systemd to control RabbitMQ, which helps an HA setup recover from disaster. The change is largely invisible to the end user. (See the configuration sketch after the attachments list below.)
Story Points: ---
Clone Of:
: 1184280
Environment:
Last Closed: 2015-03-05 18:18:42 UTC
Type: Bug
Bug Depends On: 1184280, 1185444, 1185907, 1185909    
Bug Blocks: 1177026    
Attachments (flags: none):
  logs - controller1
  logs - controller2
  logs - controller3 - where the error is seen
  logs - controller1
  logs - controller2
  logs - controller3 - where the error is seen
  foreman logs
  foreman logs+etc
  logs - controller1
  logs - controller2
  logs - controller3 - where the error is seen
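
For context on the enhancement described in the Doc Text above, the sketch below shows roughly how RabbitMQ can be placed under Pacemaker control via the ocf:heartbeat:rabbitmq-cluster resource agent instead of systemd. This is a minimal illustration, assuming the agent shipped in the resource-agents package; the resource name, clone options, and policy string are illustrative and may not match exactly what openstack-foreman-installer configures.

# Illustrative only: create a cloned rabbitmq-cluster resource so Pacemaker
# bootstraps and monitors the broker on all controllers, and let the agent
# apply the HA policy instead of a one-shot rabbitmqctl exec.
pcs resource create rabbitmq ocf:heartbeat:rabbitmq-cluster \
    set_policy='HA ^(?!amq\.).* {"ha-mode":"all"}' \
    --clone ordered=true interleave=true

# Confirm the clone started on every controller:
pcs status resources | grep -A 1 rabbitmq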

Description Alexander Chuzhoy 2014-11-27 18:33:04 UTC
rubygem-staypuft:  Puppet reports error: change from notrun to 0 failed: /usr/sbin/rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' returned 2 instead of one of [0]

Environment:
rhel-osp-installer-client-0.5.1-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.1-1.el7ost.noarch
openstack-puppet-modules-2014.2.5-1.el7ost.noarch
openstack-foreman-installer-3.0.2-1.el7ost.noarch
rhel-osp-installer-0.5.1-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch


Steps to reproduce:
1. install rhel-osp-installer
2. Create/run an HA neutron deployment with 3 controllers and 2 computes.


Result:

The deployment gets paused with errors.
Checking the puppet reports shows this error on 2 out of 3 controllers:

change from notrun to 0 failed: /usr/sbin/rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' returned 2 instead of one of [0]

Expected result:

Comment 1 Alexander Chuzhoy 2014-11-27 18:39:13 UTC
Expected result: the specified error shouldn't be reported.

Comment 2 Crag Wolfe 2014-12-02 04:45:17 UTC
I was not able to reproduce this issue; however, the recent /etc/hosts additions related to using the new galera resource agent were causing rabbit to not form the cluster correctly.  I'm assuming that is the root cause here.
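
A hypothetical diagnostic sketch for the /etc/hosts theory above (not from the original report): confirm that each controller resolves the lb-backend-* names consistently and that rabbit actually formed a three-node cluster.

# Run on each controller:
grep lb-backend /etc/hosts   # hostnames the rabbit nodes use to cluster
rabbitmqctl cluster_status   # should list all three rabbit@lb-backend-* nodes
rabbitmqctl status | head    # confirms the local broker is up at all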

Comment 3 Crag Wolfe 2014-12-02 04:46:04 UTC
Patch posted:
https://github.com/redhat-openstack/astapor/pull/414

Comment 4 Jason Guiditta 2014-12-03 20:35:24 UTC
Merged, built

Comment 6 Alexander Chuzhoy 2014-12-08 23:47:33 UTC
Verified:
Environment:
rhel-osp-installer-client-0.5.1-1.el7ost.noarch
openstack-puppet-modules-2014.2.6-1.el7ost.noarch
openstack-foreman-installer-3.0.5-1.el7ost.noarch
rhel-osp-installer-0.5.1-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.3-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch

The reported issue doesn't reproduce.

Comment 8 Alexander Chuzhoy 2014-12-18 16:52:20 UTC
Reproduced with:

Environment:
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.6-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
openstack-foreman-installer-3.0.7-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch

Comment 9 Alexander Chuzhoy 2014-12-18 17:27:49 UTC
Created attachment 970671 [details]
logs - controller1

Comment 10 Alexander Chuzhoy 2014-12-18 17:33:42 UTC
Created attachment 970674 [details]
logs - controller2

Comment 11 Alexander Chuzhoy 2014-12-18 17:35:12 UTC
Created attachment 970686 [details]
logs - controller3 - where the error is seen

Comment 12 Mike Burns 2014-12-18 18:32:23 UTC
This should be resolved with the TCP fix in version 3.0.8.

Comment 13 Alexander Chuzhoy 2014-12-23 16:32:26 UTC
Verified: failedQA
Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch

The issue reproduced.

Comment 14 Alexander Chuzhoy 2014-12-23 16:35:01 UTC
Created attachment 972452 [details]
logs - controller1

Comment 15 Alexander Chuzhoy 2014-12-23 16:36:48 UTC
Created attachment 972453 [details]
logs - controller2

Comment 16 Alexander Chuzhoy 2014-12-23 16:39:43 UTC
Created attachment 972454 [details]
logs - controller3 - where the error is seen

Comment 17 Alexander Chuzhoy 2014-12-23 16:41:07 UTC
Created attachment 972455 [details]
foreman logs

Comment 18 Mike Burns 2015-01-07 17:54:16 UTC
Sasha, can you add /etc and host yaml next time this happens?

Comment 19 Alexander Chuzhoy 2015-01-08 15:53:44 UTC
Will update as soon as this reproduces - intermittent.

Comment 20 Mike Burns 2015-01-09 03:17:02 UTC
setting needinfo until the information is provided

Comment 21 Alexander Chuzhoy 2015-01-09 16:27:26 UTC
yaml from controller1:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'true'
    backend_nfs_name: nfs
    backend_rbd: 'false'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - 192.168.0.1:/cinder
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: ''
    fence_ipmilan_expose_lanplus: ''
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: ''
    fence_ipmilan_username: ''
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: disabled
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: ba1d572281b4679d838f1aa8e195eebb
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: file
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: 192.168.0.1:/glance
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'true'
    pcmk_fs_options: nosharecache,context=\"system_u:object_r:glance_var_lib_t:s0\"
    pcmk_fs_type: nfs
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: 316bfcac128525b53102fa06ed67f938
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin
    admin_password: 17f0428942a2a2489602290d524c923e
    admin_tenant: admin
    admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'false'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-tenants:br-ens7
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ens7:ens7
    - br-ex:ens8
    ovs_tunnel_iface: ''
    ovs_tunnel_network: ''
    ovs_tunnel_types: []
    ovs_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 8a68daa734bb0d96d90fe85015b79edf
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 192.168.0.36
    ceilometer_admin_vip: 192.168.0.2
    ceilometer_group: ceilometer
    ceilometer_private_vip: 192.168.0.3
    ceilometer_public_vip: 192.168.0.4
    ceilometer_user_password: 65d64b2ee0642b06b20b158292ebbb13
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
    ceph_images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
    ceph_mon_host:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    ceph_mon_initial_members:
    - maca25400702876
    - maca25400702875
    - maca25400702877
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
    cinder_admin_vip: 192.168.0.5
    cinder_db_password: 813a7255bec47db735832fca584eea92
    cinder_group: cinder
    cinder_private_vip: 192.168.0.6
    cinder_public_vip: 192.168.0.12
    cinder_user_password: f318776406fdbc20f7ab2c62b8216c27
    cluster_control_ip: 192.168.0.8
    db_group: db
    db_vip: 192.168.0.13
    glance_admin_vip: 192.168.0.14
    glance_db_password: 92fec4b741d1fc19a9648edafee41f08
    glance_group: glance
    glance_private_vip: 192.168.0.15
    glance_public_vip: 192.168.0.16
    glance_user_password: ce34be90c27f0ec833d7338af59b67d2
    heat_admin_vip: 192.168.0.17
    heat_auth_encryption_key: 80df68944b87400f73deed7509bc569a
    heat_cfn_admin_vip: 192.168.0.20
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 192.168.0.21
    heat_cfn_public_vip: 192.168.0.22
    heat_cfn_user_password: cc8cb760fb4e533caf5a6f830aa4202b
    heat_cloudwatch_enabled: 'true'
    heat_db_password: eeb1b7c3ad2f6b01a4bf57f0130b28b4
    heat_group: heat
    heat_private_vip: 192.168.0.18
    heat_public_vip: 192.168.0.19
    heat_user_password: 16fc55a6289a54dd90db38c7d4fcb9a5
    horizon_admin_vip: 192.168.0.23
    horizon_group: horizon
    horizon_private_vip: 192.168.0.24
    horizon_public_vip: 192.168.0.25
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 192.168.0.26
    keystone_db_password: 0a6d50673b83d414fe60b0978c29fa6c
    keystone_group: keystone
    keystone_private_vip: 192.168.0.27
    keystone_public_vip: 192.168.0.28
    keystone_user_password: ae77eaae59b46cd0886476e30ec6b239
    lb_backend_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    lb_backend_server_names:
    - lb-backend-maca25400702876
    - lb-backend-maca25400702875
    - lb-backend-maca25400702877
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 192.168.0.29
    neutron: 'true'
    neutron_admin_vip: 192.168.0.30
    neutron_db_password: 55a01bbf25b7bdcae0d6180550b50fab
    neutron_group: neutron
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    neutron_private_vip: 192.168.0.31
    neutron_public_vip: 192.168.0.32
    neutron_user_password: 217f9c5802f63519705a6f43aee12d15
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 192.168.0.33
    nova_db_password: e96188774e681f887c5bcb4b251c7ab3
    nova_group: nova
    nova_private_vip: 192.168.0.34
    nova_public_vip: 192.168.0.35
    nova_user_password: 5d97d7109e643eb1a5fef7ba2a002fdb
    pcmk_iface: ''
    pcmk_ip: 192.168.0.7
    pcmk_network: ''
    pcmk_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    pcmk_server_names:
    - pcmk-maca25400702876
    - pcmk-maca25400702875
    - pcmk-maca25400702877
    private_iface: ''
    private_ip: 192.168.0.7
    private_network: ''
    swift_group: swift
    swift_public_vip: 192.168.0.37
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/HA-neutron/Controller
  root_pw: $1$+mJR/gsN$9tTKz2JuOF0DERB0uEhki.
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.7
  mac: a2:54:00:70:28:75
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  time-zone: America/New_York
  ui::ceph::fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
  ui::ceph::images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
  ui::ceph::volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'true'
  ui::cinder::nfs_uri: 192.168.0.1:/cinder
  ui::cinder::rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: nfs
  ui::glance::nfs_network_path: 192.168.0.1:/glance
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vlan
  ui::neutron::tenant_vlan_ranges: '10:15'
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: 17f0428942a2a2489602290d524c923e
  ui::passwords::amqp: 8a68daa734bb0d96d90fe85015b79edf
  ui::passwords::ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
  ui::passwords::ceilometer_user: 65d64b2ee0642b06b20b158292ebbb13
  ui::passwords::cinder_db: 813a7255bec47db735832fca584eea92
  ui::passwords::cinder_user: f318776406fdbc20f7ab2c62b8216c27
  ui::passwords::glance_db: 92fec4b741d1fc19a9648edafee41f08
  ui::passwords::glance_user: ce34be90c27f0ec833d7338af59b67d2
  ui::passwords::heat_auth_encrypt_key: 80df68944b87400f73deed7509bc569a
  ui::passwords::heat_cfn_user: cc8cb760fb4e533caf5a6f830aa4202b
  ui::passwords::heat_db: eeb1b7c3ad2f6b01a4bf57f0130b28b4
  ui::passwords::heat_user: 16fc55a6289a54dd90db38c7d4fcb9a5
  ui::passwords::horizon_secret_key: 316bfcac128525b53102fa06ed67f938
  ui::passwords::keystone_admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
  ui::passwords::keystone_db: 0a6d50673b83d414fe60b0978c29fa6c
  ui::passwords::keystone_user: ae77eaae59b46cd0886476e30ec6b239
  ui::passwords::mode: random
  ui::passwords::mysql_root: ba1d572281b4679d838f1aa8e195eebb
  ui::passwords::neutron_db: 55a01bbf25b7bdcae0d6180550b50fab
  ui::passwords::neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
  ui::passwords::neutron_user: 217f9c5802f63519705a6f43aee12d15
  ui::passwords::nova_db: e96188774e681f887c5bcb4b251c7ab3
  ui::passwords::nova_user: 5d97d7109e643eb1a5fef7ba2a002fdb
  ui::passwords::swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
  ui::passwords::swift_user: c5a2414b160e30bc10bc5d587f4d8aec
environment: production

Comment 22 Alexander Chuzhoy 2015-01-09 16:28:22 UTC
yaml from controller2:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'true'
    backend_nfs_name: nfs
    backend_rbd: 'false'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - 192.168.0.1:/cinder
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: ''
    fence_ipmilan_expose_lanplus: ''
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: ''
    fence_ipmilan_username: ''
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: disabled
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: ba1d572281b4679d838f1aa8e195eebb
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: file
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: 192.168.0.1:/glance
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'true'
    pcmk_fs_options: nosharecache,context=\"system_u:object_r:glance_var_lib_t:s0\"
    pcmk_fs_type: nfs
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: 316bfcac128525b53102fa06ed67f938
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin
    admin_password: 17f0428942a2a2489602290d524c923e
    admin_tenant: admin
    admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'false'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-tenants:br-ens7
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ens7:ens7
    - br-ex:ens8
    ovs_tunnel_iface: ''
    ovs_tunnel_network: ''
    ovs_tunnel_types: []
    ovs_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 8a68daa734bb0d96d90fe85015b79edf
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 192.168.0.36
    ceilometer_admin_vip: 192.168.0.2
    ceilometer_group: ceilometer
    ceilometer_private_vip: 192.168.0.3
    ceilometer_public_vip: 192.168.0.4
    ceilometer_user_password: 65d64b2ee0642b06b20b158292ebbb13
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
    ceph_images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
    ceph_mon_host:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    ceph_mon_initial_members:
    - maca25400702876
    - maca25400702875
    - maca25400702877
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
    cinder_admin_vip: 192.168.0.5
    cinder_db_password: 813a7255bec47db735832fca584eea92
    cinder_group: cinder
    cinder_private_vip: 192.168.0.6
    cinder_public_vip: 192.168.0.12
    cinder_user_password: f318776406fdbc20f7ab2c62b8216c27
    cluster_control_ip: 192.168.0.8
    db_group: db
    db_vip: 192.168.0.13
    glance_admin_vip: 192.168.0.14
    glance_db_password: 92fec4b741d1fc19a9648edafee41f08
    glance_group: glance
    glance_private_vip: 192.168.0.15
    glance_public_vip: 192.168.0.16
    glance_user_password: ce34be90c27f0ec833d7338af59b67d2
    heat_admin_vip: 192.168.0.17
    heat_auth_encryption_key: 80df68944b87400f73deed7509bc569a
    heat_cfn_admin_vip: 192.168.0.20
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 192.168.0.21
    heat_cfn_public_vip: 192.168.0.22
    heat_cfn_user_password: cc8cb760fb4e533caf5a6f830aa4202b
    heat_cloudwatch_enabled: 'true'
    heat_db_password: eeb1b7c3ad2f6b01a4bf57f0130b28b4
    heat_group: heat
    heat_private_vip: 192.168.0.18
    heat_public_vip: 192.168.0.19
    heat_user_password: 16fc55a6289a54dd90db38c7d4fcb9a5
    horizon_admin_vip: 192.168.0.23
    horizon_group: horizon
    horizon_private_vip: 192.168.0.24
    horizon_public_vip: 192.168.0.25
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 192.168.0.26
    keystone_db_password: 0a6d50673b83d414fe60b0978c29fa6c
    keystone_group: keystone
    keystone_private_vip: 192.168.0.27
    keystone_public_vip: 192.168.0.28
    keystone_user_password: ae77eaae59b46cd0886476e30ec6b239
    lb_backend_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    lb_backend_server_names:
    - lb-backend-maca25400702876
    - lb-backend-maca25400702875
    - lb-backend-maca25400702877
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 192.168.0.29
    neutron: 'true'
    neutron_admin_vip: 192.168.0.30
    neutron_db_password: 55a01bbf25b7bdcae0d6180550b50fab
    neutron_group: neutron
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    neutron_private_vip: 192.168.0.31
    neutron_public_vip: 192.168.0.32
    neutron_user_password: 217f9c5802f63519705a6f43aee12d15
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 192.168.0.33
    nova_db_password: e96188774e681f887c5bcb4b251c7ab3
    nova_group: nova
    nova_private_vip: 192.168.0.34
    nova_public_vip: 192.168.0.35
    nova_user_password: 5d97d7109e643eb1a5fef7ba2a002fdb
    pcmk_iface: ''
    pcmk_ip: 192.168.0.8
    pcmk_network: ''
    pcmk_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    pcmk_server_names:
    - pcmk-maca25400702876
    - pcmk-maca25400702875
    - pcmk-maca25400702877
    private_iface: ''
    private_ip: 192.168.0.8
    private_network: ''
    swift_group: swift
    swift_public_vip: 192.168.0.37
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/HA-neutron/Controller
  root_pw: $1$+mJR/gsN$9tTKz2JuOF0DERB0uEhki.
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.8
  mac: a2:54:00:70:28:76
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  time-zone: America/New_York
  ui::ceph::fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
  ui::ceph::images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
  ui::ceph::volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'true'
  ui::cinder::nfs_uri: 192.168.0.1:/cinder
  ui::cinder::rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: nfs
  ui::glance::nfs_network_path: 192.168.0.1:/glance
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vlan
  ui::neutron::tenant_vlan_ranges: '10:15'
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: 17f0428942a2a2489602290d524c923e
  ui::passwords::amqp: 8a68daa734bb0d96d90fe85015b79edf
  ui::passwords::ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
  ui::passwords::ceilometer_user: 65d64b2ee0642b06b20b158292ebbb13
  ui::passwords::cinder_db: 813a7255bec47db735832fca584eea92
  ui::passwords::cinder_user: f318776406fdbc20f7ab2c62b8216c27
  ui::passwords::glance_db: 92fec4b741d1fc19a9648edafee41f08
  ui::passwords::glance_user: ce34be90c27f0ec833d7338af59b67d2
  ui::passwords::heat_auth_encrypt_key: 80df68944b87400f73deed7509bc569a
  ui::passwords::heat_cfn_user: cc8cb760fb4e533caf5a6f830aa4202b
  ui::passwords::heat_db: eeb1b7c3ad2f6b01a4bf57f0130b28b4
  ui::passwords::heat_user: 16fc55a6289a54dd90db38c7d4fcb9a5
  ui::passwords::horizon_secret_key: 316bfcac128525b53102fa06ed67f938
  ui::passwords::keystone_admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
  ui::passwords::keystone_db: 0a6d50673b83d414fe60b0978c29fa6c
  ui::passwords::keystone_user: ae77eaae59b46cd0886476e30ec6b239
  ui::passwords::mode: random
  ui::passwords::mysql_root: ba1d572281b4679d838f1aa8e195eebb
  ui::passwords::neutron_db: 55a01bbf25b7bdcae0d6180550b50fab
  ui::passwords::neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
  ui::passwords::neutron_user: 217f9c5802f63519705a6f43aee12d15
  ui::passwords::nova_db: e96188774e681f887c5bcb4b251c7ab3
  ui::passwords::nova_user: 5d97d7109e643eb1a5fef7ba2a002fdb
  ui::passwords::swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
  ui::passwords::swift_user: c5a2414b160e30bc10bc5d587f4d8aec
environment: production

Comment 23 Alexander Chuzhoy 2015-01-09 16:29:27 UTC
yaml from controller3 - where the error is reported:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'true'
    backend_nfs_name: nfs
    backend_rbd: 'false'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - 192.168.0.1:/cinder
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: ''
    fence_ipmilan_expose_lanplus: ''
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: ''
    fence_ipmilan_username: ''
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: disabled
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: ba1d572281b4679d838f1aa8e195eebb
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: file
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: 192.168.0.1:/glance
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'true'
    pcmk_fs_options: nosharecache,context=\"system_u:object_r:glance_var_lib_t:s0\"
    pcmk_fs_type: nfs
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: 316bfcac128525b53102fa06ed67f938
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin
    admin_password: 17f0428942a2a2489602290d524c923e
    admin_tenant: admin
    admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'false'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-tenants:br-ens7
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ens7:ens7
    - br-ex:ens8
    ovs_tunnel_iface: ''
    ovs_tunnel_network: ''
    ovs_tunnel_types: []
    ovs_vlan_ranges:
    - physnet-tenants:10:15
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 8a68daa734bb0d96d90fe85015b79edf
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 192.168.0.36
    ceilometer_admin_vip: 192.168.0.2
    ceilometer_group: ceilometer
    ceilometer_private_vip: 192.168.0.3
    ceilometer_public_vip: 192.168.0.4
    ceilometer_user_password: 65d64b2ee0642b06b20b158292ebbb13
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
    ceph_images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
    ceph_mon_host:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    ceph_mon_initial_members:
    - maca25400702876
    - maca25400702875
    - maca25400702877
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
    cinder_admin_vip: 192.168.0.5
    cinder_db_password: 813a7255bec47db735832fca584eea92
    cinder_group: cinder
    cinder_private_vip: 192.168.0.6
    cinder_public_vip: 192.168.0.12
    cinder_user_password: f318776406fdbc20f7ab2c62b8216c27
    cluster_control_ip: 192.168.0.8
    db_group: db
    db_vip: 192.168.0.13
    glance_admin_vip: 192.168.0.14
    glance_db_password: 92fec4b741d1fc19a9648edafee41f08
    glance_group: glance
    glance_private_vip: 192.168.0.15
    glance_public_vip: 192.168.0.16
    glance_user_password: ce34be90c27f0ec833d7338af59b67d2
    heat_admin_vip: 192.168.0.17
    heat_auth_encryption_key: 80df68944b87400f73deed7509bc569a
    heat_cfn_admin_vip: 192.168.0.20
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 192.168.0.21
    heat_cfn_public_vip: 192.168.0.22
    heat_cfn_user_password: cc8cb760fb4e533caf5a6f830aa4202b
    heat_cloudwatch_enabled: 'true'
    heat_db_password: eeb1b7c3ad2f6b01a4bf57f0130b28b4
    heat_group: heat
    heat_private_vip: 192.168.0.18
    heat_public_vip: 192.168.0.19
    heat_user_password: 16fc55a6289a54dd90db38c7d4fcb9a5
    horizon_admin_vip: 192.168.0.23
    horizon_group: horizon
    horizon_private_vip: 192.168.0.24
    horizon_public_vip: 192.168.0.25
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 192.168.0.26
    keystone_db_password: 0a6d50673b83d414fe60b0978c29fa6c
    keystone_group: keystone
    keystone_private_vip: 192.168.0.27
    keystone_public_vip: 192.168.0.28
    keystone_user_password: ae77eaae59b46cd0886476e30ec6b239
    lb_backend_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    lb_backend_server_names:
    - lb-backend-maca25400702876
    - lb-backend-maca25400702875
    - lb-backend-maca25400702877
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 192.168.0.29
    neutron: 'true'
    neutron_admin_vip: 192.168.0.30
    neutron_db_password: 55a01bbf25b7bdcae0d6180550b50fab
    neutron_group: neutron
    neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
    neutron_private_vip: 192.168.0.31
    neutron_public_vip: 192.168.0.32
    neutron_user_password: 217f9c5802f63519705a6f43aee12d15
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 192.168.0.33
    nova_db_password: e96188774e681f887c5bcb4b251c7ab3
    nova_group: nova
    nova_private_vip: 192.168.0.34
    nova_public_vip: 192.168.0.35
    nova_user_password: 5d97d7109e643eb1a5fef7ba2a002fdb
    pcmk_iface: ''
    pcmk_ip: 192.168.0.10
    pcmk_network: ''
    pcmk_server_addrs:
    - 192.168.0.8
    - 192.168.0.7
    - 192.168.0.10
    pcmk_server_names:
    - pcmk-maca25400702876
    - pcmk-maca25400702875
    - pcmk-maca25400702877
    private_iface: ''
    private_ip: 192.168.0.10
    private_network: ''
    swift_group: swift
    swift_public_vip: 192.168.0.37
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/HA-neutron/Controller
  root_pw: $1$+mJR/gsN$9tTKz2JuOF0DERB0uEhki.
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.10
  mac: a2:54:00:70:28:77
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCohMzK7fymyfX/pyCh2wm/Jzm3eb28r3sHVt26157mVFhs4LQFS2X8ZjvPu4ixfQ4E8NPt+rd86vsAWUCTS0qIKjDIIcrtkxNzGhpVIE9KnAGTXr/aBCmwMf6pcJ8rgOom5nrLI3wRwHCYqpJvfg4mM+vIRM0Uri2W/NstIXg1xoxFa5hp7dVHll20GkugTy3li2apYCMRmwwjIdu1g7eQkoTWTArX16rkEi75LSsVl+uEvVtXkPrwAsFBRINjEF5Miy8JLmh6mzfsykjTDLu4Wz/wGjZB6yP8Q7wN1pY/gByudV57QtSnsbF5YIxU70rV6DukCuQOhAVx9hVsfInB
  time-zone: America/New_York
  ui::ceph::fsid: 8a9b537f-d72b-40a0-bbe0-c88b95ec36dc
  ui::ceph::images_key: AQDT9K5U2HSFMhAAikR4ICbcJbB1MxHcGFghoQ==
  ui::ceph::volumes_key: AQDT9K5UyHafMBAATDKsISkR9DuJbGOEHN7+Lg==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'true'
  ui::cinder::nfs_uri: 192.168.0.1:/cinder
  ui::cinder::rbd_secret_uuid: d660bf67-8df8-4940-ad28-58c4076da805
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: nfs
  ui::glance::nfs_network_path: 192.168.0.1:/glance
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vlan
  ui::neutron::tenant_vlan_ranges: '10:15'
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: 17f0428942a2a2489602290d524c923e
  ui::passwords::amqp: 8a68daa734bb0d96d90fe85015b79edf
  ui::passwords::ceilometer_metering_secret: 6d38c8a38fa50005e6da4b7be6d29b56
  ui::passwords::ceilometer_user: 65d64b2ee0642b06b20b158292ebbb13
  ui::passwords::cinder_db: 813a7255bec47db735832fca584eea92
  ui::passwords::cinder_user: f318776406fdbc20f7ab2c62b8216c27
  ui::passwords::glance_db: 92fec4b741d1fc19a9648edafee41f08
  ui::passwords::glance_user: ce34be90c27f0ec833d7338af59b67d2
  ui::passwords::heat_auth_encrypt_key: 80df68944b87400f73deed7509bc569a
  ui::passwords::heat_cfn_user: cc8cb760fb4e533caf5a6f830aa4202b
  ui::passwords::heat_db: eeb1b7c3ad2f6b01a4bf57f0130b28b4
  ui::passwords::heat_user: 16fc55a6289a54dd90db38c7d4fcb9a5
  ui::passwords::horizon_secret_key: 316bfcac128525b53102fa06ed67f938
  ui::passwords::keystone_admin_token: 5ef4ca29723a574bd877389b4f2dfe5e
  ui::passwords::keystone_db: 0a6d50673b83d414fe60b0978c29fa6c
  ui::passwords::keystone_user: ae77eaae59b46cd0886476e30ec6b239
  ui::passwords::mode: random
  ui::passwords::mysql_root: ba1d572281b4679d838f1aa8e195eebb
  ui::passwords::neutron_db: 55a01bbf25b7bdcae0d6180550b50fab
  ui::passwords::neutron_metadata_proxy_secret: 598eefcd5dcc7dbe6d1b339b8fb52b55
  ui::passwords::neutron_user: 217f9c5802f63519705a6f43aee12d15
  ui::passwords::nova_db: e96188774e681f887c5bcb4b251c7ab3
  ui::passwords::nova_user: 5d97d7109e643eb1a5fef7ba2a002fdb
  ui::passwords::swift_shared_secret: a690026231cbdaab2b4e51c2ff7048c9
  ui::passwords::swift_user: c5a2414b160e30bc10bc5d587f4d8aec
environment: production

Comment 24 Alexander Chuzhoy 2015-01-09 16:37:38 UTC
Created attachment 978289 [details]
foreman logs+etc

Comment 25 Jason Guiditta 2015-01-09 16:39:33 UTC
When running the command in question on the problematic node, I see:

Setting policy "HA" for pattern "^(?!amq\\.).*" to "{\"ha-mode\": \"all\"}" with priority "0" ...
Error: unable to connect to node 'rabbit@lb-backend-maca25400702877': nodedown

DIAGNOSTICS
===========

attempted to contact: ['rabbit@lb-backend-maca25400702877']

rabbit@lb-backend-maca25400702877:
  * connected to epmd (port 4369) on lb-backend-maca25400702877
  * epmd reports: node 'rabbit' not running at all
                  other nodes on lb-backend-maca25400702877: [rabbitmqctl2020]
  * suggestion: start the node

current node details:
- node name: rabbitmqctl2020@maca25400702877
- home dir: /var/lib/rabbitmq
- cookie hash: soeIWU2jk2YNseTyDSlsEA==

and then I see:

rabbitmqctl cluster_status
Cluster status of node 'rabbit@lb-backend-maca25400702877' ...
Error: unable to connect to node 'rabbit@lb-backend-maca25400702877': nodedown

DIAGNOSTICS
===========
attempted to contact: ['rabbit@lb-backend-maca25400702877']

rabbit@lb-backend-maca25400702877:
  * connected to epmd (port 4369) on lb-backend-maca25400702877
  * epmd reports: node 'rabbit' not running at all
                  other nodes on lb-backend-maca25400702877: [rabbitmqctl7957]
  * suggestion: start the node

current node details:
- node name: rabbitmqctl7957@maca25400702877
- home dir: /var/lib/rabbitmq
- cookie hash: soeIWU2jk2YNseTyDSlsEA==


The rabbitmq-server service is dead.  Restarting the service brings it back and clears the error on the above command.  I suspect a network issue during initial setup that requires a service restart, which apparently does not happen in the puppet run.
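
Based on the diagnosis above, a possible manual workaround sketch (not confirmed as an official procedure) when the broker is found dead on one controller:

# Restart the dead broker, verify it rejoined the cluster, then re-apply the
# HA policy that the failed Puppet exec was trying to set:
systemctl restart rabbitmq-server
rabbitmqctl cluster_status
/usr/sbin/rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'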

Comment 26 Alexander Chuzhoy 2015-01-09 16:39:48 UTC
Created attachment 978290 [details]
logs - controller1

Comment 27 Alexander Chuzhoy 2015-01-09 16:40:47 UTC
Created attachment 978291 [details]
logs - controller2

Comment 28 Alexander Chuzhoy 2015-01-09 16:42:18 UTC
Created attachment 978293 [details]
logs - controller3 - where the error is seen

Comment 29 Mike Burns 2015-01-12 15:21:49 UTC
are we saying this is a networking issue and doesn't reproduce otherwise?

Comment 30 Jason Guiditta 2015-01-12 15:25:47 UTC
That is how it appears from what I saw on the described setup, but hard to say 100%

Comment 31 Mike Burns 2015-01-12 17:58:33 UTC
Are there clear workaround steps if this fails?

Comment 32 John Eckersberg 2015-01-13 20:21:26 UTC
https://github.com/redhat-openstack/astapor/pull/453

Comment 33 Alexander Chuzhoy 2015-01-15 19:13:10 UTC
Reproduced with:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.10-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.11-1.el7ost.noarch
rhel-osp-installer-client-0.5.5-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.5-1.el7ost.noarch

Comment 35 Omri Hochman 2015-01-20 17:20:29 UTC
When encountering this bug - 'pcs status' show the following failed action : 
----------------------------------------------------------------------------
[root@maca25400702876 ~]# pcs status
Cluster name: openstack
Last updated: Tue Jan 20 11:01:23 2015
Last change: Mon Jan 19 17:48:59 2015 via cibadmin on pcmk-maca25400702875
Stack: corosync
Current DC: pcmk-maca25400702876 (3) - partition with quorum
Version: 1.1.10-32.el7_0.1-368c726
3 Nodes configured
110 Resources configured


Online: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]

Full list of resources:

 ip-ceilometer-pub-192.168.0.4	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-ceilometer-prv-192.168.0.3	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-ceilometer-adm-192.168.0.2	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-horizon-pub-192.168.0.25	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-horizon-adm-192.168.0.23	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-amqp-pub-192.168.0.36	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-loadbalancer-pub-192.168.0.29	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 ip-horizon-prv-192.168.0.24	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 Clone Set: memcached-clone [memcached]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: haproxy-clone [haproxy]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-galera-pub-192.168.0.13	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 Master/Slave Set: galera-master [galera]
     Masters: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-keystone-pub-192.168.0.28	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-keystone-adm-192.168.0.26	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-keystone-prv-192.168.0.27	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 Clone Set: keystone-clone [keystone]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-glance-pub-192.168.0.16	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 ip-glance-prv-192.168.0.15	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 Clone Set: fs-varlibglanceimages-clone [fs-varlibglanceimages]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-glance-adm-192.168.0.14	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 Clone Set: glance-registry-clone [glance-registry]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: glance-api-clone [glance-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-nova-pub-192.168.0.35	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-nova-adm-192.168.0.33	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-nova-prv-192.168.0.34	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-cinder-pub-192.168.0.12	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 ip-cinder-adm-192.168.0.5	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 ip-cinder-prv-192.168.0.6	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702876 
 Clone Set: cinder-api-clone [cinder-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: cinder-scheduler-clone [cinder-scheduler]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: cinder-volume-clone [cinder-volume]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-heat-pub-192.168.0.19	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-heat-adm-192.168.0.17	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-heat-prv-192.168.0.18	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702875 
 ip-heat_cfn-pub-192.168.0.22	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-heat_cfn-prv-192.168.0.21	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 ip-heat_cfn-adm-192.168.0.20	(ocf::heartbeat:IPaddr2):	Started pcmk-maca25400702877 
 Clone Set: heat-api-clone [heat-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Resource Group: heat
     openstack-heat-engine	(systemd:openstack-heat-engine):	Started pcmk-maca25400702876 
 Clone Set: heat-api-cfn-clone [heat-api-cfn]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: heat-api-cloudwatch-clone [heat-api-cloudwatch]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: horizon-clone [horizon]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: mongod-clone [mongod]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 openstack-ceilometer-central	(systemd:openstack-ceilometer-central):	Started pcmk-maca25400702876 
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: ceilometer-delay-clone [ceilometer-delay]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: rabbitmq-server-clone [rabbitmq-server]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702877 ]
     Stopped: [ pcmk-maca25400702876 ]

Failed actions:
    rabbitmq-server_start_0 on pcmk-maca25400702876 'OCF_PENDING' (196): call=96, status=complete, last-rc-change='Mon Jan 19 17:23:43 2015', queued=2ms, exec=2000ms


PCSD Status:
  pcmk-maca25400702875: Online
  pcmk-maca25400702877: Online
  pcmk-maca25400702876: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled

Comment 36 Omri Hochman 2015-01-20 17:22:16 UTC
The workaround to finish the deployment is:
-------------------------------------------
Run the following on the controller that had the problem (the commands are sketched after this list):
 (1) restart the rabbitmq-server service
 (2) pcs resource cleanup rabbitmq-server
 (3) puppet agent -tv
 (4) resume the deployment from the rhel-osp-installer GUI.
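For clarity, steps (1)-(3) as shell commands on the problematic controller (a sketch; run as root, with the resource name taken from the pcs status output above):

systemctl restart rabbitmq-server      # (1) restart the rabbitmq-server service
pcs resource cleanup rabbitmq-server   # (2) clear the failed action recorded by Pacemaker
puppet agent -tv                       # (3) re-run the puppet agent, then resume from the GUI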


To find which host is the problematic one:
Check the GUI: deployment status -> dynflow-console -> click on the last error to see the error details.

Comment 37 Andrew Beekhof 2015-01-21 05:05:17 UTC
I see:

> Clone Set: rabbitmq-server-clone [rabbitmq-server]
>     Started: [ pcmk-maca25400702875 pcmk-maca25400702877 ]
>     Stopped: [ pcmk-maca25400702876 ]
>
>Failed actions:
>    rabbitmq-server_start_0 on pcmk-maca25400702876 'OCF_PENDING' (196): call=96, status=complete, last-rc-change='Mon Jan 19 17:23:43 2015', queued=2ms, exec=2000ms


Do we have any information/logs about why it's failing in the first place?
(The attached logs relate to a different install)

Perhaps the default timeout is too short for RabbitMQ.
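If the timeout does turn out to be the issue, raising the start timeout on the existing resource would be one way to test that, e.g. (a sketch; the 200s value is an illustrative assumption, not a tested recommendation):

pcs resource update rabbitmq-server op start timeout=200s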

Comment 38 David Vossel 2015-01-23 18:39:44 UTC
I created the rabbitmq-cluster OCF agent. It handles automatic bootstrapping and reliable recovery of the rabbitmq-server instances.

To use this agent, you must make a few changes to the installer.

1.
Replace usage similar to:

pcs resource create rmq systemd:rabbitmq-server --clone

with:

pcs resource create rmq rabbitmq-cluster set_policy='HA ^(?!amq\.).* {"ha-mode":"all"}' clone ordered=true

The set_policy argument will initialize the policy on bootstrap.

2.
Let the agent set the HA policy. If there are any other steps done during bootstrap, let me know immediately. We need the agent to be able to automate the bootstrap in the event of something like a power outage.

3.
Remove the cluster_nodes setting from the /etc/rabbitmq/rabbitmq.config file (a quick check is sketched after this list).

This agent is smart. It knows how to bootstrap, and it knows how to join new rabbitmq-server instances into the cluster dynamically. Having the node list explicitly set in the rabbitmq.config file actually hinders this agent's ability to bootstrap and join the cluster reliably.

4.
Before this agent can be used, we need an updated SELinux policy. This bug is tracking that: https://bugzilla.redhat.com/show_bug.cgi?id=1185444
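A quick way to confirm item 3 on each controller (a sketch; the grep should return nothing once the static node list is gone):

grep -n cluster_nodes /etc/rabbitmq/rabbitmq.config
# a leftover entry would look roughly like:
#   {cluster_nodes, {['rabbit@node1', 'rabbit@node2', 'rabbit@node3'], disc}}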

-- David

Comment 41 Crag Wolfe 2015-01-27 18:19:31 UTC
Is there a way to specify the mapping of rabbitmq cluster node names to pacemaker cluster node names when running the pcs resource create command?

Comment 42 David Vossel 2015-01-27 18:29:45 UTC
(In reply to Crag Wolfe from comment #41)
> Is there a way to specify the mapping of rabbitmq cluster node names to
> pacemaker cluster node names when running the pcs resource create command?

You don't need to. I made it all work using magic!

As long as each node is configured with the correct rabbitmq node name in the local /etc/rabbitmq/rabbitmq-env.conf file, you're good to go (I believe the puppet scripts already do this). The agent builds the rabbitmq cluster dynamically based on what nodes the agent is allowed to run on.

-- David
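For reference, the per-node setting David mentions lives in /etc/rabbitmq/rabbitmq-env.conf and looks roughly like this (a sketch; the host name is illustrative, and the puppet modules are expected to manage it already):

# /etc/rabbitmq/rabbitmq-env.conf (set on each controller; the host part must match the local node)
NODENAME=rabbit@lb-backend-maca25400702875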

Comment 45 Jason Guiditta 2015-02-05 15:30:25 UTC
Merged https://github.com/redhat-openstack/astapor/pull/465

Comment 47 Alexander Chuzhoy 2015-02-18 18:39:21 UTC
Verified:
Environment:
rhel-osp-installer-client-0.5.5-5.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.15-1.el7ost.noarch
rhel-osp-installer-0.5.5-5.el7ost.noarch
ruby193-rubygem-staypuft-0.5.19-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-2.el7ost.noarch



[root@maca25400702875 ~(openstack_admin)]# pcs resource show rabbitmq-server-clone
 Clone: rabbitmq-server-clone
  Meta Attrs: ordered=true
  Resource: rabbitmq-server (class=ocf provider=heartbeat type=rabbitmq-cluster)
   Attributes: set_policy="HA ^(?!amq\.).* {"ha-mode":"all"}"
   Operations: start interval=0s timeout=100 (rabbitmq-server-start-timeout-100)
               stop interval=0s timeout=90 (rabbitmq-server-stop-timeout-90)
               monitor interval=10 timeout=40 (rabbitmq-server-monitor-interval-10)

Comment 49 errata-xmlrpc 2015-03-05 18:18:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0641.html