Bug 1173634 - rubygem-staypuft: Deployment runs without completing/failing for more than 17 hours.
Summary: rubygem-staypuft: Deployment runs without completing/failing for more than 17 hours.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhel-osp-installer
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ga
Sub Component: Installer
Assignee: Jiri Stransky
QA Contact: Alexander Chuzhoy
URL:
Whiteboard:
Duplicates: 1173808 (view as bug list)
Depends On:
Blocks: 1177026 1182576 1182581 1184630
 
Reported: 2014-12-12 14:58 UTC by Alexander Chuzhoy
Modified: 2015-02-10 02:35 UTC (History)
7 users

Fixed In Version: ruby193-rubygem-staypuft-0.5.12-1.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1182576 1182581 (view as bug list)
Environment:
Last Closed: 2015-02-09 15:18:03 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (721.50 KB, application/x-gzip)
2014-12-12 15:06 UTC, Alexander Chuzhoy
no flags Details
logs+conf controller1 (6.82 MB, application/x-gzip)
2014-12-19 18:47 UTC, Alexander Chuzhoy
no flags Details
logs+conf controller2 (6.87 MB, application/x-gzip)
2014-12-19 18:48 UTC, Alexander Chuzhoy
no flags Details
logs+conf controller3 (6.57 MB, application/x-gzip)
2014-12-19 18:49 UTC, Alexander Chuzhoy
no flags Details
foreman logs (102.46 KB, application/x-gzip)
2014-12-19 19:01 UTC, Alexander Chuzhoy
no flags Details
foreman logs (437.00 KB, application/x-gzip)
2015-01-06 20:04 UTC, Alexander Chuzhoy
no flags Details
logs - controller1 (9.23 MB, application/x-gzip)
2015-01-06 20:05 UTC, Alexander Chuzhoy
no flags Details
logs - controller2 (8.85 MB, application/x-gzip)
2015-01-06 20:07 UTC, Alexander Chuzhoy
no flags Details
logs - controller3 (7.14 MB, application/x-gzip)
2015-01-06 20:08 UTC, Alexander Chuzhoy
no flags Details
production.log (1.93 MB, text/plain)
2015-01-14 20:13 UTC, Omri Hochman
no flags Details
startup_race_console.html (122.69 KB, text/html)
2015-01-15 10:29 UTC, Jiri Stransky
no flags Details
startup_race_messages_nok (2.85 MB, text/plain)
2015-01-15 10:30 UTC, Jiri Stransky
no flags Details
startup_race_messages_ok (3.54 MB, text/plain)
2015-01-15 10:30 UTC, Jiri Stransky
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0156 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory 2015-02-09 20:13:39 UTC

Description Alexander Chuzhoy 2014-12-12 14:58:12 UTC
rubygem-staypuft: Deployment runs without completing/failing for more than 17 hours.

Environment:
openstack-puppet-modules-2014.2.7-1.el7ost.noarch
rhel-osp-installer-0.5.2-2.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
rhel-osp-installer-client-0.5.2-2.el7ost.noarch
openstack-foreman-installer-3.0.5-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.4-1.el7ost.noarch


Steps to reproduce:
1. Install rhel-osp-installer
2. Create and start an HA neutron deployment with 3 controllers and 1 compute host.


Result:

The deployment shows as running; it neither completes nor fails.

Expected result:
The deployment should complete successfully (or fail due to some particular error).

Comment 2 Alexander Chuzhoy 2014-12-12 15:06:17 UTC
Created attachment 967676 [details]
logs

Comment 6 Alexander Chuzhoy 2014-12-13 05:42:54 UTC
Verified: FailedQA

Environment:
rhel-osp-installer-client-0.5.3-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.5-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.5-1.el7ost.noarch
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
rhel-osp-installer-0.5.3-1.el7ost.noarch


A deployment has now been running for 8 hours.
This could be related to (or be the same issue as) https://bugzilla.redhat.com/show_bug.cgi?id=1173808

Comment 9 Mike Burns 2014-12-17 13:19:38 UTC
*** Bug 1173808 has been marked as a duplicate of this bug. ***

Comment 10 Mike Burns 2014-12-17 13:20:58 UTC
This is not reproducing, so I'm moving it back to ON_QA. Please move it back if it reproduces, and keep the environment around for debugging.

Thanks

Comment 12 Alexander Chuzhoy 2014-12-17 23:39:30 UTC
Verified:
Environment:
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
puppet-3.6.2-2.el7.noarch
openstack-foreman-installer-3.0.7-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch
puppet-server-3.6.2-2.el7.noarch

The bug doesn't reproduce.

Comment 13 Alexander Chuzhoy 2014-12-19 14:36:37 UTC
Verified: FailedQA


Reproduced with an HA neutron deployment using a GRE network:
Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch

Comment 14 Mike Burns 2014-12-19 15:22:50 UTC
We would like to see logs and more information on what happened in this environment.  Saying it failed doesn't give us enough information to investigate.

Comment 15 Alexander Chuzhoy 2014-12-19 18:47:31 UTC
Created attachment 971291 [details]
logs+conf controller1

Comment 16 Alexander Chuzhoy 2014-12-19 18:48:31 UTC
Created attachment 971292 [details]
logs+conf controller2

Comment 17 Alexander Chuzhoy 2014-12-19 18:49:26 UTC
Created attachment 971293 [details]
logs+conf controller3

Comment 18 Alexander Chuzhoy 2014-12-19 18:52:00 UTC
The logs+conf were taken from another setup where:
1. The last puppet report is from more than 3 hours ago.
2. Puppet isn't running on any controller.
3. The deployment's state is "running".

Comment 19 Alexander Chuzhoy 2014-12-19 19:01:13 UTC
Created attachment 971295 [details]
foreman logs

Comment 20 Alexander Chuzhoy 2014-12-29 16:34:34 UTC
The following was observed on a non-HA neutron deployment:

1. Puppet already ran once on the controller and compute nodes. The only error in the puppet report was the one reported in BZ #1175399.

2. Puppet didn't run on the hosts for hours after that single run.

3. I manually ran puppet on the controller and on the compute node.

4. The puppet run completed successfully, triggering more puppet runs.

5. The deployment completed successfully.

Comment 21 Jiri Stransky 2015-01-05 12:50:29 UTC
This is a different problem which would warrant a new BZ, I'd say. The original problem was a race condition -- running the puppet agent too soon after the previous run would cause it to not run at all:

Dec 11 17:55:43 maca25400702876 puppet-agent[4013]: Finished catalog run in 1939.18 seconds
Dec 11 17:56:12 maca25400702876 puppet-agent[22063]: Run of Puppet configuration client already in progress; skipping  (/var/lib/puppet/state/agent_catalog_run.lock exists)
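
(For illustration only -- a minimal Ruby sketch, not the actual staypuft code, of the kind of guard that avoids this lockfile race: re-trigger the agent only after the previous run has released its lock.)

# Hypothetical sketch: avoid the "Run of Puppet configuration client
# already in progress; skipping" race by waiting for the previous agent
# run to release its lockfile before re-triggering.
LOCKFILE = '/var/lib/puppet/state/agent_catalog_run.lock'

def trigger_puppet_when_idle(timeout: 300, interval: 5)
  deadline = Time.now + timeout
  while File.exist?(LOCKFILE)
    return false if Time.now > deadline  # previous run still holds the lock; give up
    sleep interval
  end
  system('puppet', 'agent', '--onetime', '--no-daemonize')
end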

But that race is not what's happening in the newest attached logs. I looked through /var/log/messages and it seems the first puppet run never got to the "Finished catalog run" state. The real problem is that creating a pacemaker cluster failed for some reason, and the puppet agent then skipped configuring almost everything and eventually either hung or somehow got killed; unfortunately, I don't think that can be determined from the logs. But it looks like the run never finished and never sent its report to Foreman.

I wonder if this can somehow be solved from DynFlow. It should be possible to make the wait for the puppet report time out, but that still wouldn't address the root cause of this problem, and the machine would sit idle for e.g. 2 hours before attempting to proceed. I think we'll need to debug the root cause of pacemaker not getting set up.


Dec 19 11:43:17 maca25400702875 puppet-agent[10701]: (/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns) Error: unable to connect to pcsd on pcmk-maca25400702877
Dec 19 11:43:17 maca25400702875 puppet-agent[10701]: (/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns) Unable to connect to pcmk-maca25400702877 ([Errno 111] Connection refused)
Dec 19 11:43:17 maca25400702875 puppet-agent[10701]: (/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns) pcmk-maca25400702876: Authorized
Dec 19 11:43:17 maca25400702875 puppet-agent[10701]: /usr/sbin/pcs cluster auth pcmk-maca25400702876 pcmk-maca25400702877 pcmk-maca25400702875 -u hacluster -p CHANGEME --force returned 1 instead of one of [0]
Dec 19 11:43:17 maca25400702875 puppet-agent[10701]: (/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns) change from notrun to 0 failed: /usr/sbin/pcs cluster auth pcmk-maca25400702876 pcmk-maca25400702877 pcmk-maca25400702875 -u hacluster -p CHANGEME --force returned 1 instead of one of [0]

Comment 22 Mike Burns 2015-01-05 16:54:34 UTC
Per comment 21, moving this to ON_QA. Please file a new bug for the new issue.

Comment 23 Alexander Chuzhoy 2015-01-06 20:00:10 UTC
Verified: FailedQA


Environment:
openstack-foreman-installer-3.0.8-1.el7ost.noarch
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch

Comment 24 Alexander Chuzhoy 2015-01-06 20:01:35 UTC
yaml from controller1:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'false'
    backend_nfs_name: nfs
    backend_rbd: 'true'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - ''
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: 10.19.143.61
    fence_ipmilan_expose_lanplus: 'true'
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: 100yard-
    fence_ipmilan_username: root
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: fence_ipmilan
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: 100yard-
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: rbd
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: ''
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'false'
    pcmk_fs_options: ''
    pcmk_fs_type: ''
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: dc85c64c75b5482ad6b5b0f1122b8476
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin.eng.bos.redhat.com
    admin_password: 100yard-
    admin_tenant: admin
    admin_token: 100yard-
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'true'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vxlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ex:eth2
    ovs_tunnel_iface: enp2s0f0
    ovs_tunnel_network: ''
    ovs_tunnel_types:
    - vxlan
    ovs_vlan_ranges:
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 100yard-
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 172.55.55.80
    ceilometer_admin_vip: 172.55.55.56
    ceilometer_group: ceilometer
    ceilometer_private_vip: 172.55.55.58
    ceilometer_public_vip: 10.19.136.176
    ceilometer_user_password: 100yard-
    ceph_cluster_network: ''
    ceph_fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
    ceph_images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
    ceph_mon_host:
    - 192.0.100.178
    - 192.0.100.176
    - 192.0.100.177
    ceph_mon_initial_members:
    - macc81f6665334f
    - macc81f66653342
    - macc81f6665335c
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 172.55.48.0/21
    ceph_volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
    cinder_admin_vip: 172.55.55.59
    cinder_db_password: 100yard-
    cinder_group: cinder
    cinder_private_vip: 172.55.55.60
    cinder_public_vip: 10.19.136.177
    cinder_user_password: 100yard-
    cluster_control_ip: 172.55.55.53
    db_group: db
    db_vip: 172.55.55.61
    glance_admin_vip: 172.55.55.62
    glance_db_password: 100yard-
    glance_group: glance
    glance_private_vip: 172.55.55.67
    glance_public_vip: 10.19.136.178
    glance_user_password: 100yard-
    heat_admin_vip: 172.55.55.68
    heat_auth_encryption_key: 955fa26c7d9689147ed925e8c84ee4c8
    heat_cfn_admin_vip: 172.55.55.70
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 172.55.55.71
    heat_cfn_public_vip: 10.19.136.180
    heat_cfn_user_password: 100yard-
    heat_cloudwatch_enabled: 'true'
    heat_db_password: 100yard-
    heat_group: heat
    heat_private_vip: 172.55.55.69
    heat_public_vip: 10.19.136.179
    heat_user_password: 100yard-
    horizon_admin_vip: 172.55.55.72
    horizon_group: horizon
    horizon_private_vip: 172.55.55.73
    horizon_public_vip: 10.19.136.181
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 172.55.55.74
    keystone_db_password: 100yard-
    keystone_group: keystone
    keystone_private_vip: 172.55.55.75
    keystone_public_vip: 10.19.136.182
    keystone_user_password: 100yard-
    lb_backend_server_addrs:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    lb_backend_server_names:
    - lb-backend-macc81f6665334f
    - lb-backend-macc81f66653342
    - lb-backend-macc81f6665335c
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 10.19.136.183
    neutron: 'true'
    neutron_admin_vip: 172.55.55.76
    neutron_db_password: 100yard-
    neutron_group: neutron
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    neutron_private_vip: 172.55.55.77
    neutron_public_vip: 10.19.136.184
    neutron_user_password: 100yard-
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 172.55.55.78
    nova_db_password: 100yard-
    nova_group: nova
    nova_private_vip: 172.55.55.79
    nova_public_vip: 10.19.136.185
    nova_user_password: 100yard-
    pcmk_iface: ''
    pcmk_ip: 55.0.100.177
    pcmk_network: ''
    pcmk_server_addrs:
    - 55.0.100.178
    - 55.0.100.177
    - 55.0.100.176
    pcmk_server_names:
    - pcmk-macc81f6665334f
    - pcmk-macc81f66653342
    - pcmk-macc81f6665335c
    private_iface: ''
    private_ip: 172.55.55.54
    private_network: ''
    swift_group: swift
    swift_public_vip: 10.19.136.186
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: spina2.cloud.lab.eng.bos.redhat.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/demo/Controller
  root_pw: $1$8yCKZJRa$BV3g4N2k.A2560SxIAUXY.
  puppet_ca: spina2.cloud.lab.eng.bos.redhat.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root.eng.bos.redhat.com
  ip: 172.55.55.54
  mac: c8:1f:66:65:33:42
  ntp-server: 10.16.255.2
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  time-zone: UTC
  ui::ceph::fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
  ui::ceph::images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
  ui::ceph::volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
  ui::cinder::backend_ceph: 'true'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'false'
  ui::cinder::rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: ceph
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: af22b9d37056475d472f6a3604057d6e
  ui::passwords::amqp: 28364ae158829bfe868391781363f1c5
  ui::passwords::ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
  ui::passwords::ceilometer_user: 3ef66815ac389d298aa5023b629d6399
  ui::passwords::cinder_db: eeac7bb34967a7c79b2409e54c201188
  ui::passwords::cinder_user: c281c4634c9fccf2f0b58da5b0e69185
  ui::passwords::glance_db: 25e5541992f7cc8c2c75ab1d7aca7ad5
  ui::passwords::glance_user: e4af0fbd4af0a37fbc5e7ecb63526e0e
  ui::passwords::heat_auth_encrypt_key: 955fa26c7d9689147ed925e8c84ee4c8
  ui::passwords::heat_cfn_user: ea38784158eb420109a3e3badb300eb8
  ui::passwords::heat_db: bc6b72a10601615e97c1e76a961a3ab5
  ui::passwords::heat_user: 472298ec3e5a68f78f75a5a25af191b9
  ui::passwords::horizon_secret_key: dc85c64c75b5482ad6b5b0f1122b8476
  ui::passwords::keystone_admin_token: 991ef0869c4537a89484b8c84573a1dc
  ui::passwords::keystone_db: e3b0d11501d6c0c9fd25a66056f8da5e
  ui::passwords::keystone_user: c663a8b1e4a6f83756f49461db3336ab
  ui::passwords::mode: single
  ui::passwords::mysql_root: 6e06ec7de14b2a8d35f154928761eb7b
  ui::passwords::neutron_db: 1473a34e2544cd1b7806c5ef055cad1e
  ui::passwords::neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
  ui::passwords::neutron_user: 92f2f3ddf4f6eebeb5b28ee4694c4b3e
  ui::passwords::nova_db: ccc6e51fc22899bb3777b98354db7055
  ui::passwords::nova_user: f87f3ece1853e30250834f05598f45f7
  ui::passwords::single_password: 100yard-
  ui::passwords::swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
  ui::passwords::swift_user: fa421bb1a98aab824dd7290400d108bb
environment: production

Comment 25 Alexander Chuzhoy 2015-01-06 20:02:11 UTC
yaml from controller2:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'false'
    backend_nfs_name: nfs
    backend_rbd: 'true'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - ''
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: 10.19.143.63
    fence_ipmilan_expose_lanplus: 'true'
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: 100yard-
    fence_ipmilan_username: root
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: fence_ipmilan
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: 100yard-
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: rbd
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: ''
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'false'
    pcmk_fs_options: ''
    pcmk_fs_type: ''
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: dc85c64c75b5482ad6b5b0f1122b8476
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin.eng.bos.redhat.com
    admin_password: 100yard-
    admin_tenant: admin
    admin_token: 100yard-
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'true'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vxlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ex:eth2
    ovs_tunnel_iface: enp2s0f0
    ovs_tunnel_network: ''
    ovs_tunnel_types:
    - vxlan
    ovs_vlan_ranges:
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 100yard-
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 172.55.55.80
    ceilometer_admin_vip: 172.55.55.56
    ceilometer_group: ceilometer
    ceilometer_private_vip: 172.55.55.58
    ceilometer_public_vip: 10.19.136.176
    ceilometer_user_password: 100yard-
    ceph_cluster_network: ''
    ceph_fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
    ceph_images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
    ceph_mon_host:
    - 192.0.100.178
    - 192.0.100.176
    - 192.0.100.177
    ceph_mon_initial_members:
    - macc81f6665334f
    - macc81f66653342
    - macc81f6665335c
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 172.55.48.0/21
    ceph_volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
    cinder_admin_vip: 172.55.55.59
    cinder_db_password: 100yard-
    cinder_group: cinder
    cinder_private_vip: 172.55.55.60
    cinder_public_vip: 10.19.136.177
    cinder_user_password: 100yard-
    cluster_control_ip: 172.55.55.53
    db_group: db
    db_vip: 172.55.55.61
    glance_admin_vip: 172.55.55.62
    glance_db_password: 100yard-
    glance_group: glance
    glance_private_vip: 172.55.55.67
    glance_public_vip: 10.19.136.178
    glance_user_password: 100yard-
    heat_admin_vip: 172.55.55.68
    heat_auth_encryption_key: 955fa26c7d9689147ed925e8c84ee4c8
    heat_cfn_admin_vip: 172.55.55.70
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 172.55.55.71
    heat_cfn_public_vip: 10.19.136.180
    heat_cfn_user_password: 100yard-
    heat_cloudwatch_enabled: 'true'
    heat_db_password: 100yard-
    heat_group: heat
    heat_private_vip: 172.55.55.69
    heat_public_vip: 10.19.136.179
    heat_user_password: 100yard-
    horizon_admin_vip: 172.55.55.72
    horizon_group: horizon
    horizon_private_vip: 172.55.55.73
    horizon_public_vip: 10.19.136.181
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 172.55.55.74
    keystone_db_password: 100yard-
    keystone_group: keystone
    keystone_private_vip: 172.55.55.75
    keystone_public_vip: 10.19.136.182
    keystone_user_password: 100yard-
    lb_backend_server_addrs:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    lb_backend_server_names:
    - lb-backend-macc81f6665334f
    - lb-backend-macc81f66653342
    - lb-backend-macc81f6665335c
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 10.19.136.183
    neutron: 'true'
    neutron_admin_vip: 172.55.55.76
    neutron_db_password: 100yard-
    neutron_group: neutron
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    neutron_private_vip: 172.55.55.77
    neutron_public_vip: 10.19.136.184
    neutron_user_password: 100yard-
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 172.55.55.78
    nova_db_password: 100yard-
    nova_group: nova
    nova_private_vip: 172.55.55.79
    nova_public_vip: 10.19.136.185
    nova_user_password: 100yard-
    pcmk_iface: ''
    pcmk_ip: 55.0.100.176
    pcmk_network: ''
    pcmk_server_addrs:
    - 55.0.100.178
    - 55.0.100.177
    - 55.0.100.176
    pcmk_server_names:
    - pcmk-macc81f6665334f
    - pcmk-macc81f66653342
    - pcmk-macc81f6665335c
    private_iface: ''
    private_ip: 172.55.55.55
    private_network: ''
    swift_group: swift
    swift_public_vip: 10.19.136.186
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: spina2.cloud.lab.eng.bos.redhat.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/demo/Controller
  root_pw: $1$8yCKZJRa$BV3g4N2k.A2560SxIAUXY.
  puppet_ca: spina2.cloud.lab.eng.bos.redhat.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root.eng.bos.redhat.com
  ip: 172.55.55.55
  mac: c8:1f:66:65:33:5c
  ntp-server: 10.16.255.2
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  time-zone: UTC
  ui::ceph::fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
  ui::ceph::images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
  ui::ceph::volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
  ui::cinder::backend_ceph: 'true'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'false'
  ui::cinder::rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: ceph
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: af22b9d37056475d472f6a3604057d6e
  ui::passwords::amqp: 28364ae158829bfe868391781363f1c5
  ui::passwords::ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
  ui::passwords::ceilometer_user: 3ef66815ac389d298aa5023b629d6399
  ui::passwords::cinder_db: eeac7bb34967a7c79b2409e54c201188
  ui::passwords::cinder_user: c281c4634c9fccf2f0b58da5b0e69185
  ui::passwords::glance_db: 25e5541992f7cc8c2c75ab1d7aca7ad5
  ui::passwords::glance_user: e4af0fbd4af0a37fbc5e7ecb63526e0e
  ui::passwords::heat_auth_encrypt_key: 955fa26c7d9689147ed925e8c84ee4c8
  ui::passwords::heat_cfn_user: ea38784158eb420109a3e3badb300eb8
  ui::passwords::heat_db: bc6b72a10601615e97c1e76a961a3ab5
  ui::passwords::heat_user: 472298ec3e5a68f78f75a5a25af191b9
  ui::passwords::horizon_secret_key: dc85c64c75b5482ad6b5b0f1122b8476
  ui::passwords::keystone_admin_token: 991ef0869c4537a89484b8c84573a1dc
  ui::passwords::keystone_db: e3b0d11501d6c0c9fd25a66056f8da5e
  ui::passwords::keystone_user: c663a8b1e4a6f83756f49461db3336ab
  ui::passwords::mode: single
  ui::passwords::mysql_root: 6e06ec7de14b2a8d35f154928761eb7b
  ui::passwords::neutron_db: 1473a34e2544cd1b7806c5ef055cad1e
  ui::passwords::neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
  ui::passwords::neutron_user: 92f2f3ddf4f6eebeb5b28ee4694c4b3e
  ui::passwords::nova_db: ccc6e51fc22899bb3777b98354db7055
  ui::passwords::nova_user: f87f3ece1853e30250834f05598f45f7
  ui::passwords::single_password: 100yard-
  ui::passwords::swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
  ui::passwords::swift_user: fa421bb1a98aab824dd7290400d108bb
environment: production

Comment 26 Alexander Chuzhoy 2015-01-06 20:03:02 UTC
yaml from controller3:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'false'
    backend_nfs_name: nfs
    backend_rbd: 'true'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - ''
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: 10.19.143.62
    fence_ipmilan_expose_lanplus: 'true'
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: 100yard-
    fence_ipmilan_username: root
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: fence_ipmilan
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: 100yard-
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: rbd
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: ''
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'false'
    pcmk_fs_options: ''
    pcmk_fs_type: ''
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: dc85c64c75b5482ad6b5b0f1122b8476
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin.eng.bos.redhat.com
    admin_password: 100yard-
    admin_tenant: admin
    admin_token: 100yard-
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'true'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vxlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ex:eth2
    ovs_tunnel_iface: enp2s0f0
    ovs_tunnel_network: ''
    ovs_tunnel_types:
    - vxlan
    ovs_vlan_ranges:
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 100yard-
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 172.55.55.80
    ceilometer_admin_vip: 172.55.55.56
    ceilometer_group: ceilometer
    ceilometer_private_vip: 172.55.55.58
    ceilometer_public_vip: 10.19.136.176
    ceilometer_user_password: 100yard-
    ceph_cluster_network: ''
    ceph_fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
    ceph_images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
    ceph_mon_host:
    - 192.0.100.178
    - 192.0.100.176
    - 192.0.100.177
    ceph_mon_initial_members:
    - macc81f6665334f
    - macc81f66653342
    - macc81f6665335c
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 172.55.48.0/21
    ceph_volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
    cinder_admin_vip: 172.55.55.59
    cinder_db_password: 100yard-
    cinder_group: cinder
    cinder_private_vip: 172.55.55.60
    cinder_public_vip: 10.19.136.177
    cinder_user_password: 100yard-
    cluster_control_ip: 172.55.55.53
    db_group: db
    db_vip: 172.55.55.61
    glance_admin_vip: 172.55.55.62
    glance_db_password: 100yard-
    glance_group: glance
    glance_private_vip: 172.55.55.67
    glance_public_vip: 10.19.136.178
    glance_user_password: 100yard-
    heat_admin_vip: 172.55.55.68
    heat_auth_encryption_key: 955fa26c7d9689147ed925e8c84ee4c8
    heat_cfn_admin_vip: 172.55.55.70
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 172.55.55.71
    heat_cfn_public_vip: 10.19.136.180
    heat_cfn_user_password: 100yard-
    heat_cloudwatch_enabled: 'true'
    heat_db_password: 100yard-
    heat_group: heat
    heat_private_vip: 172.55.55.69
    heat_public_vip: 10.19.136.179
    heat_user_password: 100yard-
    horizon_admin_vip: 172.55.55.72
    horizon_group: horizon
    horizon_private_vip: 172.55.55.73
    horizon_public_vip: 10.19.136.181
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 172.55.55.74
    keystone_db_password: 100yard-
    keystone_group: keystone
    keystone_private_vip: 172.55.55.75
    keystone_public_vip: 10.19.136.182
    keystone_user_password: 100yard-
    lb_backend_server_addrs:
    - 172.55.55.53
    - 172.55.55.54
    - 172.55.55.55
    lb_backend_server_names:
    - lb-backend-macc81f6665334f
    - lb-backend-macc81f66653342
    - lb-backend-macc81f6665335c
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 10.19.136.183
    neutron: 'true'
    neutron_admin_vip: 172.55.55.76
    neutron_db_password: 100yard-
    neutron_group: neutron
    neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
    neutron_private_vip: 172.55.55.77
    neutron_public_vip: 10.19.136.184
    neutron_user_password: 100yard-
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 172.55.55.78
    nova_db_password: 100yard-
    nova_group: nova
    nova_private_vip: 172.55.55.79
    nova_public_vip: 10.19.136.185
    nova_user_password: 100yard-
    pcmk_iface: ''
    pcmk_ip: 55.0.100.178
    pcmk_network: ''
    pcmk_server_addrs:
    - 55.0.100.178
    - 55.0.100.177
    - 55.0.100.176
    pcmk_server_names:
    - pcmk-macc81f6665334f
    - pcmk-macc81f66653342
    - pcmk-macc81f6665335c
    private_iface: ''
    private_ip: 172.55.55.53
    private_network: ''
    swift_group: swift
    swift_public_vip: 10.19.136.186
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: spina2.cloud.lab.eng.bos.redhat.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/demo/Controller
  root_pw: $1$8yCKZJRa$BV3g4N2k.A2560SxIAUXY.
  puppet_ca: spina2.cloud.lab.eng.bos.redhat.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root.eng.bos.redhat.com
  ip: 172.55.55.53
  mac: c8:1f:66:65:33:4f
  ntp-server: 10.16.255.2
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCw0/tymHuLle7rBQjv9JJeJ4BLQ2A92Nk9zDFCqHZwJLzzJJTJ7O1WDtZmEISqWTVTdem1Wn7C89q2MDOhSUhDQ8Y9hFdvCmmZpWy9N8OZWz2mmrth+U7auZjjgmqqiVjlfgfbViwrPF7dnh+pLL//9I4Mncqtty6tmCMpEbc7LGO/W30lE2pG0Q8N1qm2gOhqM5PAAcf7iyEwDIltlWkjdBQYUKIuaZii1KB6Ee8bQo+Jx0H/Ye9DZUSr/5PkMgnHe/cMRaZUwHR3S3XR8dJqxQImBjeqcWKmB9BuXcDlLtLfMji5ymVRstOKX7gb9RLwq4KTFCaLAwrvu2ym+msh
  time-zone: UTC
  ui::ceph::fsid: 2bfea4f0-d52d-49ba-b91f-85bfab95dc9c
  ui::ceph::images_key: AQDQwJlU8HQzCBAAIKAr+pnB1L6Q5Q5cgPmnIQ==
  ui::ceph::volumes_key: AQDQwJlU2PPmBhAAuAXbiF5g9o1Av4l2XJDfMQ==
  ui::cinder::backend_ceph: 'true'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'false'
  ui::cinder::rbd_secret_uuid: d8f0b640-c0ba-4c27-8f31-b3238c8bbaa6
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: ceph
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: af22b9d37056475d472f6a3604057d6e
  ui::passwords::amqp: 28364ae158829bfe868391781363f1c5
  ui::passwords::ceilometer_metering_secret: a77a6df4cf3fcbc355e0952232097cc5
  ui::passwords::ceilometer_user: 3ef66815ac389d298aa5023b629d6399
  ui::passwords::cinder_db: eeac7bb34967a7c79b2409e54c201188
  ui::passwords::cinder_user: c281c4634c9fccf2f0b58da5b0e69185
  ui::passwords::glance_db: 25e5541992f7cc8c2c75ab1d7aca7ad5
  ui::passwords::glance_user: e4af0fbd4af0a37fbc5e7ecb63526e0e
  ui::passwords::heat_auth_encrypt_key: 955fa26c7d9689147ed925e8c84ee4c8
  ui::passwords::heat_cfn_user: ea38784158eb420109a3e3badb300eb8
  ui::passwords::heat_db: bc6b72a10601615e97c1e76a961a3ab5
  ui::passwords::heat_user: 472298ec3e5a68f78f75a5a25af191b9
  ui::passwords::horizon_secret_key: dc85c64c75b5482ad6b5b0f1122b8476
  ui::passwords::keystone_admin_token: 991ef0869c4537a89484b8c84573a1dc
  ui::passwords::keystone_db: e3b0d11501d6c0c9fd25a66056f8da5e
  ui::passwords::keystone_user: c663a8b1e4a6f83756f49461db3336ab
  ui::passwords::mode: single
  ui::passwords::mysql_root: 6e06ec7de14b2a8d35f154928761eb7b
  ui::passwords::neutron_db: 1473a34e2544cd1b7806c5ef055cad1e
  ui::passwords::neutron_metadata_proxy_secret: 3e85916cf9d29382d0960f2ee081bab4
  ui::passwords::neutron_user: 92f2f3ddf4f6eebeb5b28ee4694c4b3e
  ui::passwords::nova_db: ccc6e51fc22899bb3777b98354db7055
  ui::passwords::nova_user: f87f3ece1853e30250834f05598f45f7
  ui::passwords::single_password: 100yard-
  ui::passwords::swift_shared_secret: 83c4ddf693930f7d0d909e6cf91fcdca
  ui::passwords::swift_user: fa421bb1a98aab824dd7290400d108bb
environment: production

Comment 27 Alexander Chuzhoy 2015-01-06 20:04:05 UTC
Created attachment 976922 [details]
foreman logs

Comment 28 Alexander Chuzhoy 2015-01-06 20:05:25 UTC
Created attachment 976923 [details]
logs - controller1

Comment 29 Alexander Chuzhoy 2015-01-06 20:07:20 UTC
Created attachment 976924 [details]
logs - controller2

Comment 30 Alexander Chuzhoy 2015-01-06 20:08:27 UTC
Created attachment 976925 [details]
logs - controller3

Comment 31 Jiri Stransky 2015-01-07 15:42:53 UTC
Submitted one more amendment to ensure that the lockfile is removed in case puppet needs to be killed by puppetssh:

https://github.com/theforeman/foreman-installer-staypuft/pull/129
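
(For context, a hedged Ruby sketch of the behavior that amendment describes -- an assumed shape, not the PR's actual code: if the agent has to be killed, also remove the stale lockfile so the next run isn't skipped as "already in progress".)

LOCKFILE = '/var/lib/puppet/state/agent_catalog_run.lock'

# Hypothetical sketch of the cleanup: kill the agent, then delete the
# stale agent lockfile so a subsequent run is not silently skipped.
def kill_puppet_and_cleanup(pid)
  Process.kill('TERM', pid)
rescue Errno::ESRCH
  nil  # process already gone
ensure
  File.delete(LOCKFILE) if File.exist?(LOCKFILE)
end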

Comment 32 Alexander Chuzhoy 2015-01-07 16:02:04 UTC
This error appeared in the puppet report when the issue reproduced:
No route to host - connect(2) at  110:/etc/puppet/environments/production/modules/neutron/manifests/server/notifications.pp

Comment 33 Jiri Stransky 2015-01-08 14:09:31 UTC
Merged upstream.

Comment 36 Omri Hochman 2015-01-14 19:54:48 UTC
Failed QA: we reproduced this bug on two different environments (neutron-gre-ha).

In both cases we saw that on 1 out of 3 controllers, the puppet agent wasn't triggered even once after the OS was provisioned. As a result, 'pcs' was not installed on that controller, which produces the following error (reported in comment #21):

change from notrun to 0 failed: /usr/sbin/pcs cluster auth pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 -u hacluster -p CHANGEME --force returned 1 instead of one of [0]

Despite this error, since puppet was not triggered even once (nor rerun 3 times), the deployment status in the Staypuft GUI remained 'Deploying..' for 17 hours.



Environment:
-------------
rhel-osp-installer-0.5.5-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.11-1.el7ost.noarch
foreman-1.6.0.49-4.el7ost.noarch
rubygem-foreman_api-0.1.11-6.el7sat.noarch
openstack-foreman-installer-3.0.9-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
foreman-installer-1.6.0-0.2.RC1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
puppet-3.6.2-2.el7.noarch
puppet-server-3.6.2-2.el7.noarch

Comment 37 Omri Hochman 2015-01-14 20:01:39 UTC
DynFlow view:
----------------
Note:
- Task 54 is 'suspended'
- Task 62 is 'pending' 

49: Actions::Staypuft::Host::ReportWait (success) [ 4706.58s / 10.03s ]
52: Actions::Staypuft::Host::PuppetRun (success) [ 0.02s / 0.02s ]
54: Actions::Staypuft::Host::ReportWait (suspended) [ 12850.11s / 24.37s ]
57: Actions::Staypuft::Host::PuppetRun (success) [ 0.32s / 0.32s ]
59: Actions::Staypuft::Host::ReportWait (success) [ 4761.79s / 9.13s ]
62: Actions::Staypuft::Host::PuppetRun (pending)
64: Actions::Staypuft::Host::ReportWait (pending)


Task 54 :
---------
54: Actions::Staypuft::Host::ReportWait (suspended) [ 12850.11s / 24.37s ]
Started at: 2015-01-14 15:49:49 UTC
Ended at: 2015-01-14 19:23:59 UTC
Real time: 12850.11s
Execution time (excluding suspended state): 24.37s

Input:
---
host_id: 3
after: '2015-01-14T10:49:49-05:00'
current_user_id: 3

Output:
---
status: false
poll_attempts:
  total: 2563
  failed: 0

Task 62:
---------
Started at:
Ended at:
Real time: 0.00
Execution time (excluding suspended state): 0.00s

Input:
---
host_id: 4
name: maca25400702877.example.com
current_user_id: 3

Output:
---
 {}
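
(To make the hang above concrete: a minimal Ruby sketch, with assumed helper names rather than the real DynFlow/Staypuft API, of a ReportWait-style poll that gives up after a bounded number of attempts instead of staying suspended indefinitely -- task 54 above accumulated 2563 poll attempts.)

# Hypothetical sketch (report_received_after? is an assumed query helper):
# poll for a puppet report newer than `after`, but stop after a bounded
# number of attempts so the deployment fails visibly instead of showing
# "running" forever.
def wait_for_report(host_id, after, max_attempts: 2880, interval: 5)
  max_attempts.times do
    return true if report_received_after?(host_id, after)
    sleep interval
  end
  false  # caller should mark the deployment step as timed out
end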

Comment 38 Omri Hochman 2015-01-14 20:13:10 UTC
Created attachment 980204 [details]
production.log

Adding production.log, hoping it will help us understand why puppet wasn't triggered even once on that controller after provisioning was over.

BTW, puppet is installed on that machine:
rpm -qa | grep puppet
puppet-3.6.2-2.el7.noarch

Comment 39 Jiri Stransky 2015-01-15 10:28:11 UTC
Investigated one of the environments. It seems the cause might be yet another race condition. We check whether a host is ready after provisioning and reboot by looking at whether the ssh port is open, but that doesn't seem to be enough. Sometimes, even if the ssh port is open, the ssh connection for puppetssh apparently won't get established.

Foreman Proxy only allows us to trigger puppetssh runs on the host, not to run arbitrary commands and see their results, so we'll have to fix this by giving the hosts some time to finish booting after we detect that the ssh port is open.

Even in the case where the puppet run *does* work, the first puppetssh connection is established ("Started Session 1 of user root." in syslog) 30 seconds before systemd prints "Reached target Multi-User System.". So it seems that giving the hosts some additional time would be desirable; see the sketch below.
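
(A minimal Ruby sketch of a stricter readiness check than "the ssh port is open" -- illustrative only, and the eventual fix may simply add a delay: require an SSH banner, then allow a grace period for the host to finish booting.)

require 'socket'
require 'timeout'

# Hypothetical sketch: the port can accept TCP connections well before the
# host reaches multi-user target, so also read the SSH banner and then
# wait a grace period before triggering puppetssh.
def ssh_ready?(host, grace: 60)
  Timeout.timeout(10) do
    TCPSocket.open(host, 22) do |sock|
      banner = sock.gets
      return false unless banner&.start_with?('SSH-')
    end
  end
  sleep grace  # let the host finish booting (see timing note above)
  true
rescue StandardError
  false
end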

Comment 40 Jiri Stransky 2015-01-15 10:29:25 UTC
Created attachment 980405 [details]
startup_race_console.html

Comment 41 Jiri Stransky 2015-01-15 10:30:16 UTC
Created attachment 980406 [details]
startup_race_messages_nok

Comment 42 Jiri Stransky 2015-01-15 10:30:52 UTC
Created attachment 980407 [details]
startup_race_messages_ok

Comment 43 Jiri Stransky 2015-01-15 10:37:29 UTC
Attached some log files from the deployment in question. (The deployer manually triggered a puppet run on the problematic host.)

Comment 44 Mike Burns 2015-01-15 14:43:04 UTC
The second issue has been moved to bug 1182581.

Comment 45 Jiri Stransky 2015-01-15 18:43:12 UTC
The fix for the issue I described in comment 39 has been submitted as a pull request:

https://github.com/theforeman/staypuft/pull/406

Comment 47 Omri Hochman 2015-01-16 15:18:25 UTC
Unable to reproduce with: ruby193-rubygem-staypuft-0.5.12-1.el7ost.noarch

Comment 49 errata-xmlrpc 2015-02-09 15:18:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0156.html

