Bug 1389926 - [osp-director-10]: Upgrade from 9 to 10 with no SSL fails during the CONVERGENCE STEP (overcloud-AllNodesDeploySteps)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-common
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 10.0 (Newton)
Assignee: Sofer Athlan-Guyot
QA Contact: Omri Hochman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-29 13:55 UTC by Omri Hochman
Modified: 2016-12-29 16:57 UTC
CC: 11 users

Fixed In Version: openstack-tripleo-common-5.3.0-4.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-14 16:26:27 UTC


Attachments
messages (442.82 KB, application/x-bzip), 2016-10-29 14:05 UTC, Omri Hochman
heat-engine.log (2.91 MB, application/x-bzip), 2016-10-29 14:06 UTC, Omri Hochman


Links
Red Hat Product Errata RHEA-2016:2948 (normal, SHIPPED_LIVE): Red Hat OpenStack Platform 10 enhancement update, 2016-12-14 19:55:27 UTC
OpenStack gerrit 394195, 2016-11-07 16:46:48 UTC
Launchpad 1638003, 2016-11-02 17:02:15 UTC

Description Omri Hochman 2016-10-29 13:55:47 UTC
[osp-director-10]: Upgrade from 9 to 10 with no SSL fails during the CONVERGENCE STEP (overcloud-AllNodesDeploySteps)


Environment (undercloud):
---------------------------
instack-5.0.0-1.el7ost.noarch
instack-undercloud-5.0.0-2.el7ost.noarch
openstack-tripleo-heat-templates-5.0.0-0.8.0rc3.el7ost.noarch
python-uri-templates-0.6-5.el7ost.noarch
openstack-heat-templates-0-0.5.1e6015dgit.el7ost.noarch
openstack-tripleo-heat-templates-compat-2.0.0-34.3.el7ost.noarch
[stack@undercloud72 ~]$ rpm -qa | grep heat
openstack-tripleo-heat-templates-5.0.0-0.8.0rc3.el7ost.noarch
python-heat-agent-0-0.5.1e6015dgit.el7ost.noarch
openstack-heat-api-cfn-7.0.0-4.el7ost.noarch
heat-cfntools-1.3.0-2.el7ost.noarch
python-heat-tests-7.0.0-4.el7ost.noarch
puppet-heat-9.4.1-1.el7ost.noarch
openstack-heat-common-7.0.0-4.el7ost.noarch
openstack-heat-engine-7.0.0-4.el7ost.noarch
python-heatclient-1.5.0-1.el7ost.noarch
openstack-heat-templates-0-0.5.1e6015dgit.el7ost.noarch
openstack-heat-api-7.0.0-4.el7ost.noarch


Step:
-----
Deploy OSP 9 without SSL and attempt to upgrade to OSP 10.

Results:
--------
Upgrade failed during the CONVERGENCE STEP.
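For context, the converge step in a 9 -> 10 major upgrade is the final `openstack overcloud deploy` run that re-applies the full Newton templates with the converge environment. A minimal sketch of that invocation, assuming the default tripleo-heat-templates path; the actual list of `-e` environment files must match the ones used for the original deployment:

```shell
# Assumed default templates path on the undercloud.
THT=/usr/share/openstack-tripleo-heat-templates
# Converge environment file from the Newton tripleo-heat-templates.
CONVERGE_CMD="openstack overcloud deploy --templates \
  -e $THT/environments/major-upgrade-pacemaker-converge.yaml"
# A real run would execute this against the undercloud; here we only
# print the command so the sketch is self-contained.
echo "$CONVERGE_CMD"
```

Each earlier upgrade phase (init, controller upgrade, compute upgrade) passes its own environment file in the same way; only the converge run triggers AllNodesDeploySteps across every node, which is where this report fails.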


failure view:
--------------
09:30:41 2016-10-29 09:29:18Z [AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_COMPLETE  Stack CREATE completed successfully
09:30:41 2016-10-29 09:29:18Z [AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_COMPLETE  state changed
09:30:41 2016-10-29 09:30:23Z [AllNodesDeploySteps.ControllerDeployment_Step2.1]: SIGNAL_IN_PROGRESS  Signal: deployment ce8c0075-0a3f-4d91-b1b8-cab648712b03 succeeded
09:30:41 2016-10-29 09:30:24Z [AllNodesDeploySteps.ControllerDeployment_Step2.1]: CREATE_COMPLETE  state changed
09:30:41 2016-10-29 09:30:29Z [AllNodesDeploySteps.ControllerDeployment_Step2.2]: SIGNAL_IN_PROGRESS  Signal: deployment 393d13c5-aaa9-4943-8731-1487f71197cf succeeded
09:31:24 2016-10-29 09:30:29Z [AllNodesDeploySteps.ControllerDeployment_Step2.2]: CREATE_COMPLETE  state changed
09:31:24 Heat Stack update failed.
09:31:24 2016-10-29 09:31:13Z [AllNodesDeploySteps.ControllerDeployment_Step2.0]: SIGNAL_IN_PROGRESS  Signal: deployment 838ba395-c10b-4da1-b52d-6ff2c0884e5d failed (6)
09:31:24 2016-10-29 09:31:14Z [AllNodesDeploySteps.ControllerDeployment_Step2.0]: CREATE_FAILED  Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
09:31:24 2016-10-29 09:31:14Z [AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_FAILED  Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
09:31:24 2016-10-29 09:31:15Z [AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_FAILED  Error: resources.ControllerDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
09:31:24 2016-10-29 09:31:15Z [AllNodesDeploySteps]: CREATE_FAILED  Resource CREATE failed: Error: resources.ControllerDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
09:31:24 2016-10-29 09:31:17Z [AllNodesDeploySteps]: CREATE_FAILED  Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
09:31:24 2016-10-29 09:31:17Z [overcloud]: UPDATE_FAILED  Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
09:31:24 
09:31:24  Stack overcloud UPDATE_FAILED 
09:31:24 
09:31:24 There was an error running ### CONVERGENCE STEP ###. Exiting....
09:31:24 Build step 'Virtualenv Builder' marked build as failure
09:31:25 Build step 'Groovy Postbuild' marked build as failure
09:31:25 Build step 'Groovy Postbuild' marked build as failure
09:31:25 [BFA] Scanning build for known causes...
09:31:26 .[BFA] No failure causes found
09:31:26 [BFA] Done. 1s
09:31:26 Started calculate disk usage of build
09:31:26 Finished Calculation of disk usage of build in 0 seconds
09:31:26 Started calculate disk usage of workspace
09:31:26 Finished Calculation of disk usage of workspace in 0 seconds
09:31:26 Finished: FAILURE


[stack@undercloud72 ~]$ heat resource-list overcloud -n 5 | grep -v COMPLETE
WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead
+----------------------------+--------------------------------------+-------------------------------------+-----------------+----------------------+------------------------------------------------------------------------------------+
| resource_name              | physical_resource_id                 | resource_type                       | resource_status | updated_time         | stack_name                                                                         |
+----------------------------+--------------------------------------+-------------------------------------+-----------------+----------------------+------------------------------------------------------------------------------------+
| AllNodesDeploySteps        | 35df363e-6010-4eae-9074-719f879b6cc6 | OS::TripleO::PostDeploySteps        | CREATE_FAILED   | 2016-10-29T09:20:53Z | overcloud                                                                          |
| ControllerDeployment_Step2 | d2145351-344d-43ec-a1a1-d0847443f9b7 | OS::Heat::StructuredDeploymentGroup | CREATE_FAILED   | 2016-10-29T09:20:57Z | overcloud-AllNodesDeploySteps-cxwbij6m3ebt                                         |
| 0                          | 838ba395-c10b-4da1-b52d-6ff2c0884e5d | OS::Heat::StructuredDeployment      | CREATE_FAILED   | 2016-10-29T09:28:13Z | overcloud-AllNodesDeploySteps-cxwbij6m3ebt-ControllerDeployment_Step2-bjuo6rw42o3i |
+----------------------------+--------------------------------------+-------------------------------------+-----------------+----------------------+------------------------------------------------------------------------------------+


[stack@undercloud72 ~]$ heat deployment-show 838ba395-c10b-4da1-b52d-6ff2c0884e5d
WARNING (shell) "heat deployment-show" is deprecated, please use "openstack software deployment show" instead
{
  "status": "FAILED", 
  "server_id": "f0d697a5-1670-4aec-bcf5-807118ee6af7", 
  "config_id": "58a2a9c6-054d-4f31-98e1-ffc3ebcaab30", 
  "output_values": {
    "deploy_stdout": "Matching apachectl 'Server version: Apache/2.4.6 (Red Hat Enterprise Linux)\nServer built:   Aug  3 2016 08:33:27'\n\u001b[mNotice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.\u001b[0m\n\u001b[mNotice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 11.66 seconds\u001b[0m\n\u001b[mNotice: /Stage[setup]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'system_u' to 'unconfined_u'\u001b[0m\n\u001b[mNotice: /File[/etc/sysconfig/iptables]/seltype: seltype changed 'etc_t' to 'system_conf_t'\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_controller_pacemaker2]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.openstack]/File[/etc/ceph/ceph.client.openstack.keyring]/owner: owner changed 'ceph' to 'root'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.openstack]/File[/etc/ceph/ceph.client.openstack.keyring]/group: group changed 'ceph' to 'root'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.bootstrap-osd]/Exec[ceph-key-client.bootstrap-osd]/returns: + ceph-authtool /var/lib/ceph/bootstrap-osd/ceph.keyring --name client.bootstrap-osd --add-key AQC0SBRYAAAAABAAM3hqhw961NrFNbRv2UWChw== --cap mon 'allow profile bootstrap-osd'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.bootstrap-osd]/Exec[ceph-key-client.bootstrap-osd]/returns: added entity client.bootstrap-osd auth auth(auid = 18446744073709551615 key=AQC0SBRYAAAAABAAM3hqhw961NrFNbRv2UWChw== with 0 caps)\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.bootstrap-osd]/Exec[ceph-key-client.bootstrap-osd]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.admin]/File[/etc/ceph/ceph.client.admin.keyring]/owner: owner changed 'ceph' to 'root'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Ceph::Keys/Ceph::Key[client.admin]/File[/etc/ceph/ceph.client.admin.keyring]/group: group changed 'ceph' to 'root'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.openstack]/Exec[ceph-key-client.openstack]/returns: + ceph-authtool /etc/ceph/ceph.client.openstack.keyring --name client.openstack --add-key AQC0SBRYAAAAABAAXDSiIVBwjafAyRxoXBUeCQ== --cap mon 'allow r' --cap osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.openstack]/Exec[ceph-key-client.openstack]/returns: added entity client.openstack auth auth(auid = 18446744073709551615 key=AQC0SBRYAAAAABAAXDSiIVBwjafAyRxoXBUeCQ== with 0 caps)\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.openstack]/Exec[ceph-key-client.openstack]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}b432e3530685b2b53034e4dc1be5193e' to '{md5}7d37008224e71625019cb48768f267e7'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.admin]/Exec[ceph-key-client.admin]/returns: + ceph-authtool /etc/ceph/ceph.client.admin.keyring --name client.admin --add-key AQC0SBRYAAAAABAAM3hqhw961NrFNbRv2UWChw== --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.admin]/Exec[ceph-key-client.admin]/returns: added entity client.admin auth auth(auid = 18446744073709551615 key=AQC0SBRYAAAAABAAM3hqhw961NrFNbRv2UWChw== with 0 caps)\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Keys/Ceph::Key[client.admin]/Exec[ceph-key-client.admin]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift/Package[swift]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Swift/File[/etc/swift]/owner: owner changed 'root' to 'swift'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift/File[/etc/swift]/group: group changed 'root' to 'swift'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift/File[/var/run/swift]/group: group changed 'root' to 'swift'\u001b[0m\n\u001b[mNotice: /Stage[main]/Xinetd/Service[xinetd]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]/content: content changed '{md5}82e4f375f963d5bec4b9ddfd997e0297' to '{md5}5e969579b403ac9856ce19ba9b8de576'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.7]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.9]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.12]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/ceilometer.ns]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.ns]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.8]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.1]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.5]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.6]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.2]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.4]/mode: mode changed '0600' 
to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.11]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.3]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.0]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/local.10]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/journal/prealloc.1]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/journal/j._0]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/journal/prealloc.2]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb/ceilometer.0]/mode: mode changed '0600' to '0755'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Service/Service[mongodb]/enable: enable changed 'false' to 'true'\u001b[0m\n\u001b[mNotice: /Stage[main]/Mongodb::Server::Service/Service[mongodb]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Deps/Anchor[swift::install::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed 'PuJv7X4uhrH8MZJEwyP3akb8Z' to 'EcNRDDpKnjRdAuU9urxgYjyQV'\u001b[0m\n\u001b[mNotice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 3 events\u001b[0m\n\u001b[mNotice: /File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql/Exec[galera-ready]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Mysql_database[ceilometer]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_10.19.104.12]/Mysql_user[ceilometer@10.19.104.12]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_10.19.104.12]/Mysql_user[glance@10.19.104.12]/password_hash: password_hash changed '*AE67DF92813B126F5BC7CB3DC7167DACA5093777' to '*26D819D0EE093C1624D78BE224BB135F0140E6B6'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/password_hash: password_hash changed '*D3B6D3E90E685113D5C8BC7834661F5647450751' to '*DEEEA22DC29DEFBF8D577668233416E261FFD9FC'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_10.19.104.12]/Mysql_user[nova@10.19.104.12]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/password_hash: password_hash changed '*AE67DF92813B126F5BC7CB3DC7167DACA5093777' to '*26D819D0EE093C1624D78BE224BB135F0140E6B6'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_10.19.104.10]/Mysql_user[glance@10.19.104.10]/password_hash: password_hash changed '*AE67DF92813B126F5BC7CB3DC7167DACA5093777' to '*26D819D0EE093C1624D78BE224BB135F0140E6B6'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_10.19.104.10]/Mysql_user[ceilometer@10.19.104.10]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/password_hash: password_hash changed '*5FCAE5FC758D7687D6A838B449F9B03559F52D6B' to '*FF36A4A4ADB7458B3BD9EF6B188290E1CAC5FAC3'\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql/Mysql_user[clustercheck@localhost]/password_hash: password_hash changed '*FD4926E180E88EFDF09DBCDD5182067323F75E4A' to '*7ED586F0A6F0E98437DC8201DD20281849288F6C'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_10.19.104.12]/Mysql_grant[ceilometer@10.19.104.12/ceilometer.*]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_10.19.104.10]/Mysql_user[sahara@10.19.104.10]/password_hash: password_hash changed '*F16C18EBC8641966E3B38D30BEB2196C7F146E10' to '*C98BEAFD3E1DB798887D562BA5632D5668CD465D'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_10.19.104.10]/Mysql_user[nova@10.19.104.10]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Base[control_vip-then-haproxy]/Exec[Creating order constraint control_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_10.19.104.12]/Mysql_user[cinder@10.19.104.12]/password_hash: password_hash changed '*C0F9C49C5C4F272A8BF0E4D54F0FB5640A33594B' to '*9C86C187B6E413E989294FC7B54727F29976981C'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_10.19.104.10]/Mysql_user[nova_api@10.19.104.10]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_10.19.104.10]/Mysql_user[neutron@10.19.104.10]/password_hash: password_hash changed '*8C7BF4E621C9B3BD4247EA39CF66A3F918E0EA8A' to '*2102A370D13F27F2611C89958198DF74D6428D18'\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_10.19.104.10]/Mysql_user[keystone@10.19.104.10]/password_hash: password_hash changed '*B1FACD60AB74FECDDBB736103069DD049F410959' to '*CE7BF40499FC11E15573FB68BFBFB093149F2A18'\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/password_hash: password_hash 
changed '*B1FACD60AB74FECDDBB736103069DD049F410959' to '*CE7BF40499FC11E15573FB68BFBFB093149F2A18'\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_10.19.104.12]/Mysql_user[keystone@10.19.104.12]/password_hash: password_hash changed '*B1FACD60AB74FECDDBB736103069DD049F410959' to '*CE7BF40499FC11E15573FB68BFBFB093149F2A18'\u001b[0m\n\u001b[mNotice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_10.19.104.10]/Mysql_user[gnocchi@10.19.104.10]/password_hash: password_hash changed '*5FCAE5FC758D7687D6A838B449F9B03559F52D6B' to '*FF36A4A4ADB7458B3BD9EF6B188290E1CAC5FAC3'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_10.19.104.10]/Mysql_user[heat@10.19.104.10]/password_hash: password_hash changed '*D3B6D3E90E685113D5C8BC7834661F5647450751' to '*DEEEA22DC29DEFBF8D577668233416E261FFD9FC'\u001b[0m\n\u001b[mNotice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_10.19.104.10]/Mysql_user[cinder@10.19.104.10]/password_hash: password_hash changed '*C0F9C49C5C4F272A8BF0E4D54F0FB5640A33594B' to '*9C86C187B6E413E989294FC7B54727F29976981C'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_10.19.104.10]/Mysql_grant[ceilometer@10.19.104.10/ceilometer.*]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/password_hash: password_hash changed '*C0F9C49C5C4F272A8BF0E4D54F0FB5640A33594B' to '*9C86C187B6E413E989294FC7B54727F29976981C'\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_10.19.104.12]/Mysql_user[nova_api@10.19.104.12]/password_hash: password_hash changed '*95C50D362122D7A051E4421D749CBA0FEDDBBFA5' to '*50AA4765FFD193C42D450D7EA677F4E172062B7B'\u001b[0m\n\u001b[mNotice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_10.19.104.12]/Mysql_user[gnocchi@10.19.104.12]/password_hash: password_hash changed '*5FCAE5FC758D7687D6A838B449F9B03559F52D6B' to '*FF36A4A4ADB7458B3BD9EF6B188290E1CAC5FAC3'\u001b[0m\n\u001b[mNotice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/password_hash: password_hash changed '*F16C18EBC8641966E3B38D30BEB2196C7F146E10' to '*C98BEAFD3E1DB798887D562BA5632D5668CD465D'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_%]/Mysql_user[ceilometer@%]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceilometer::Db::Mysql/Openstacklib::Db::Mysql[ceilometer]/Openstacklib::Db::Mysql::Host_access[ceilometer_%]/Mysql_grant[ceilometer@%/ceilometer.*]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/password_hash: password_hash changed '*8C7BF4E621C9B3BD4247EA39CF66A3F918E0EA8A' to '*2102A370D13F27F2611C89958198DF74D6428D18'\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_10.19.104.12]/Mysql_user[heat@10.19.104.12]/password_hash: password_hash changed '*D3B6D3E90E685113D5C8BC7834661F5647450751' to '*DEEEA22DC29DEFBF8D577668233416E261FFD9FC'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.10]/Mysql_user[aodh@10.19.104.10]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.12]/Mysql_user[aodh@10.19.104.12]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.12]/Mysql_grant[aodh@10.19.104.12/aodh.*]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Base[public_vip-then-haproxy]/Exec[Creating order constraint public_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_10.19.104.12]/Mysql_grant[neutron@10.19.104.12/ovs_neutron.*]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Base[storage_mgmt_vip-then-haproxy]/Exec[Creating order constraint storage_mgmt_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.10]/Mysql_grant[aodh@10.19.104.10/aodh.*]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_10.19.104.12]/Mysql_grant[sahara@10.19.104.12/sahara.*]: Dependency Mysql_user[sahara@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Base[internal_api_vip-then-haproxy]/Exec[Creating order constraint internal_api_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Triggered 'refresh' from 2 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Base[redis_vip-then-haproxy]/Exec[Creating order constraint 
redis_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Base[storage_vip-then-haproxy]/Exec[Creating order constraint storage_vip-then-haproxy]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Firewall[998 log all]: Dependency Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Firewall[998 log all]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Firewall[998 log all]: Dependency Mysql_user[sahara@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Firewall[999 drop all]: Dependency 
Mysql_database[aodh] has failures: true\u001b[0m\n\u001b[mNotice: /Firewall[999 drop all]: Dependency Mysql_user[neutron@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: /Firewall[999 drop all]: Dependency Mysql_user[sahara@10.19.104.12] has failures: true\u001b[0m\n\u001b[mNotice: Finished catalog run in 51.60 seconds\u001b[0m\n", 
    "deploy_stderr": "exception: connect failed\n\u001b[1;31mWarning: Scope(Class[Mongodb::Server]): Replset specified, but no replset_members or replset_config provided.\u001b[0m\n\u001b[1;31mWarning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.\u001b[0m\n\u001b[1;31mError: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`' returned 1: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104\u001b[0m\n\u001b[1;31mError: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/ensure: change from absent to present failed: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`' returned 1: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.10]/Mysql_user[aodh@10.19.104.10]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.12]/Mysql_user[aodh@10.19.104.12]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.12]/Mysql_grant[aodh@10.19.104.12/aodh.*]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mError: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e SET PASSWORD FOR 'neutron'@'10.19.104.12' = '*2102A370D13F27F2611C89958198DF74D6428D18'' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)\u001b[0m\n\u001b[1;31mError: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_10.19.104.12]/Mysql_user[neutron@10.19.104.12]/password_hash: change from *8C7BF4E621C9B3BD4247EA39CF66A3F918E0EA8A to *2102A370D13F27F2611C89958198DF74D6428D18 failed: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e SET PASSWORD FOR 'neutron'@'10.19.104.12' = '*2102A370D13F27F2611C89958198DF74D6428D18'' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_10.19.104.12]/Mysql_grant[neutron@10.19.104.12/ovs_neutron.*]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_10.19.104.10]/Mysql_grant[aodh@10.19.104.10/aodh.*]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mError: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e SET PASSWORD FOR 'sahara'@'10.19.104.12' = '*C98BEAFD3E1DB798887D562BA5632D5668CD465D'' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)\u001b[0m\n\u001b[1;31mError: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_10.19.104.12]/Mysql_user[sahara@10.19.104.12]/password_hash: change from *F16C18EBC8641966E3B38D30BEB2196C7F146E10 to *C98BEAFD3E1DB798887D562BA5632D5668CD465D failed: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e SET PASSWORD FOR 'sahara'@'10.19.104.12' = '*C98BEAFD3E1DB798887D562BA5632D5668CD465D'' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_10.19.104.12]/Mysql_grant[sahara@10.19.104.12/sahara.*]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Firewall[998 log all]: Skipping because of failed dependencies\u001b[0m\n\u001b[1;31mWarning: /Firewall[999 drop all]: Skipping because of failed dependencies\u001b[0m\n", 
    "deploy_status_code": 6
  }, 
  "creation_time": "2016-10-29T09:28:20Z", 
  "updated_time": "2016-10-29T09:31:13Z", 
  "input_values": {
    "step": 2, 
    "update_identifier": "1477732275"
  }, 
  "action": "CREATE", 
  "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6", 
  "id": "838ba395-c10b-4da1-b52d-6ff2c0884e5d"

Comment 1 Omri Hochman 2016-10-29 13:57:45 UTC
The initial error might be:
-------------------------
Error: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`' returned 1: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104



Adding PCS status of the controller: 
-------------------------------------
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Sat Oct 29 13:57:15 2016          Last change: Sat Oct 29 09:31:06 2016 by root via cibadmin on overcloud-controller-0

3 nodes and 19 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-10.19.184.210       (ocf::heartbeat:IPaddr2):       Started overcloud-controller-0
 ip-192.168.200.10      (ocf::heartbeat:IPaddr2):       Started overcloud-controller-1
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-192.168.0.6 (ocf::heartbeat:IPaddr2):       Started overcloud-controller-0
 Master/Slave Set: galera-master [galera]
     galera     (ocf::heartbeat:galera):        FAILED Master overcloud-controller-2 (blocked)
     Slaves: [ overcloud-controller-0 overcloud-controller-1 ]
 ip-10.19.104.11        (ocf::heartbeat:IPaddr2):       Started overcloud-controller-0
 ip-10.19.105.10        (ocf::heartbeat:IPaddr2):       Started overcloud-controller-1
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 ip-10.19.104.10        (ocf::heartbeat:IPaddr2):       Started overcloud-controller-1
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started overcloud-controller-0

Failed Actions:
* galera_promote_0 on overcloud-controller-2 'unknown error' (1): call=108, status=complete, exitreason='Failed initial monitor action',
    last-rc-change='Sat Oct 29 09:31:01 2016', queued=0ms, exec=9055ms
* galera_monitor_10000 on overcloud-controller-1 'unknown error' (1): call=95, status=complete, exitreason='local node <overcloud-controller-1> is started, but not in primary mode. Unknown state.',
    last-rc-change='Sat Oct 29 09:30:32 2016', queued=0ms, exec=0ms
* redis_start_0 on overcloud-controller-1 'unknown error' (1): call=103, status=Timed Out, exitreason='none',
    last-rc-change='Sat Oct 29 09:26:13 2016', queued=0ms, exec=120003ms
* galera_monitor_10000 on overcloud-controller-0 'unknown error' (1): call=62, status=complete, exitreason='local node <overcloud-controller-0> is started, but not in primary mode. Unknown state.',
    last-rc-change='Sat Oct 29 09:30:34 2016', queued=0ms, exec=0ms
* redis_demote_0 on overcloud-controller-0 'unknown error' (1): call=82, status=Timed Out, exitreason='none',
    last-rc-change='Sat Oct 29 09:27:22 2016', queued=0ms, exec=120006ms


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Comment 2 Omri Hochman 2016-10-29 14:05:52 UTC
Created attachment 1215326 [details]
messages

adding the messages file from the controller

Comment 3 Omri Hochman 2016-10-29 14:06:30 UTC
Created attachment 1215327 [details]
heat-engine.log

adding the heat-engine.log from the undercloud

Comment 4 Sofer Athlan-Guyot 2016-11-01 13:33:04 UTC
Hi,

I could reproduce the same kind of error, i.e. galera fails to restart:


 Master/Slave Set: galera-master [galera]
     galera	(ocf::heartbeat:galera):	FAILED Master overcloud-controller-1 (blocked)
     galera	(ocf::heartbeat:galera):	FAILED Master overcloud-controller-2 (blocked)
     galera	(ocf::heartbeat:galera):	FAILED Master overcloud-controller-0 (blocked)


Failed Actions:
* galera_promote_0 on overcloud-controller-1 'unknown error' (1): call=88, status=complete, exitreason='Failure, Attempted to promote Master instance of galera before bootstrap node has been detected.',
    last-rc-change='Tue Nov  1 12:19:22 2016', queued=0ms, exec=298ms
* galera_promote_0 on overcloud-controller-2 'unknown error' (1): call=91, status=complete, exitreason='Failure, Attempted to promote Master instance of galera before bootstrap node has been detected.',
    last-rc-change='Tue Nov  1 12:19:34 2016', queued=1ms, exec=143ms
* galera_promote_0 on overcloud-controller-0 'unknown error' (1): call=104, status=complete, exitreason='Failure, Attempted to promote Master instance of galera before bootstrap node has been detected.',
    last-rc-change='Tue Nov  1 12:20:07 2016', queued=0ms, exec=183ms
* openstack-cinder-volume_monitor_60000 on overcloud-controller-0 'not running' (7): call=171, status=complete, exitreason='none',
    last-rc-change='Tue Nov  1 12:52:39 2016', queued=0ms, exec=0ms


It looks like an orchestration error for galera restart.

Comment 5 Sofer Athlan-Guyot 2016-11-02 15:14:08 UTC
Some more information, no complete answer yet.

So, during step 1 of the convergence, galera.cnf is modified by puppet
like this:

   --- /var/lib/puppet/clientbucket/8/8/e/0/e/3/2/a/88e0e32ad4cb54738f47722677f7c548/contents      2016-11-01 12:19:07.994640899 +0000
   +++ /etc/my.cnf.d/galera.cnf    2016-11-02 12:40:36.088640899 +0000
   @@ -9,7 +9,7 @@
    
    [mysqld]
    basedir = /usr
   -bind-address = overcloud-controller-0
   +bind-address = overcloud-controller-0.internalapi.localdomain
    binlog_format = ROW
    datadir = /var/lib/mysql
    default-storage-engine = innodb
   @@ -50,7 +50,7 @@
    wsrep_max_ws_size = 1073741824
    wsrep_on = ON
    wsrep_provider = /usr/lib64/galera/libgalera_smm.so
   -wsrep_provider_options = gmcast.listen_addr=tcp://[172.16.2.7]:4567;
   +wsrep_provider_options = gmcast.listen_addr=tcp://172.16.2.7:4567;
    wsrep_retry_autocommit = 1
    wsrep_slave_threads = 1
    wsrep_sst_method = rsync
   @@ -82,6 +82,3 @@
    
    
    
   -[mysqld]
   -wsrep_on = ON
   -wsrep_cluster_address = gcomm://localhost
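A quick way to double-check a node for that leftover bootstrap stanza (the duplicate [mysqld] section pointing wsrep_cluster_address at gcomm://localhost, which the puppet run above removes) is a small scan of the file. This is a hypothetical helper, not part of the deployment tooling; the sample content is made up to mirror the diff:

```python
# Hypothetical check: flag lines where a galera.cnf still carries the
# temporary bootstrap setting "wsrep_cluster_address = gcomm://localhost".
# A node restarted with that stanza in place would try to bootstrap a
# one-node cluster instead of rejoining its peers.

def find_bootstrap_stanza(cnf_text):
    """Return the 1-based line numbers of suspicious bootstrap settings."""
    hits = []
    for lineno, raw in enumerate(cnf_text.splitlines(), start=1):
        line = raw.strip()
        if line.startswith('wsrep_cluster_address') and 'gcomm://localhost' in line:
            hits.append(lineno)
    return hits

sample = """\
[mysqld]
basedir = /usr
wsrep_provider_options = gmcast.listen_addr=tcp://172.16.2.7:4567;

[mysqld]
wsrep_on = ON
wsrep_cluster_address = gcomm://localhost
"""
print(find_bootstrap_stanza(sample))  # -> [7]
```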


The puppet trace is:

   Debug: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]: The container Class[Mysql::Server::Config] will propagate my refresh event
   Info: Computing checksum on file /etc/my.cnf.d/galera.cnf
   Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf.d/galera.cnf to puppet with sum 88e0e32ad4cb54738f47722677f7c548
   Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}88e0e32ad4cb54738f47722677f7c548' to '{md5}9041141bee9882324d31b6fde51ea197'
   Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event
   Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Scheduling refresh of Tripleo::Pacemaker::Resource_restart_flag[galera-master]
   Debug: Class[Mysql::Server::Config]: The container Stage[main] will propagate my refresh event
   
Firewall is modified as well:

   Debug: Firewall[104 mysql galera](provider=iptables): Current resource: Puppet::Type::Firewall
   Debug: Executing '/usr/sbin/iptables -I INPUT 5 -t filter -p tcp -m multiport --dports 873,3306,4444,4567,4568,9200 -m comment --comment 104 mysql galera -m state --state NEW -j ACCEPT'
   Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera]/Firewall[104 mysql galera]/ensure: created
   Debug: Firewall[104 mysql galera](provider=iptables): [flush]
   
And on the stderr of the same puppet run we can see:

   Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.
   Error: Could not prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
   Error: Could not prefetch mysql_database provider 'mysql': Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

and associated stdout:

   Debug: Executing 'test -f /.mysql_secret'
   Debug: Prefetching mysql resources for mysql_user
   Debug: Executing '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@overcloud-controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: Prefetching mysql resources for mysql_database
   Debug: Executing '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases'
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@overcloud-controller-0]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@overcloud-controller-0]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@overcloud-controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]: Nothing to manage: no ensure and the resource doesn't exist
   Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]: Nothing to manage: no ensure and the resource doesn't exist

So it looks like step 1 is indeed failing, but the puppet prefetch error does not raise an error, and we go on to step 2 with an already failed database.
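An illustrative sketch of what treating that prefetch failure as fatal could look like — the function and the error patterns below are hypothetical, not the actual tripleo or puppet hook:

```python
# Hypothetical post-run check: scan a puppet step's stderr for provider
# prefetch failures and mysql connection errors, so a step with a dead
# database is reported as failed instead of silently continuing.
import re

FATAL_PATTERNS = [
    re.compile(r"Could not prefetch \w+ provider"),
    re.compile(r"ERROR 2002 \(HY000\)"),  # can't reach the mysql socket
    re.compile(r"ERROR 2013 \(HY000\)"),  # connection lost during handshake
]

def puppet_step_failed(stderr_text):
    """Return True if any line of stderr matches a fatal pattern."""
    return any(p.search(line)
               for line in stderr_text.splitlines()
               for p in FATAL_PATTERNS)

stderr = ("Error: Could not prefetch mysql_user provider 'mysql': "
          "ERROR 2002 (HY000): Can't connect to local MySQL server")
print(puppet_step_failed(stderr))  # -> True
```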

Comment 6 Sofer Athlan-Guyot 2016-11-02 16:15:30 UTC
So, after updating the db with the clustercheck password as found in /etc/sysconfig/clustercheck, and doing a pcs resource cleanup, the cluster was rebuilt flawlessly.

So the problem might be that the password was updated in the file instead of keeping the current one.
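For reference, the manual fix boils down to re-applying the password found in /etc/sysconfig/clustercheck to the database user. This minimal sketch only rebuilds the SQL statement from that file; the MYSQL_USERNAME/MYSQL_PASSWORD variable names are an assumption based on the stock clustercheck file, and the naive string interpolation is only acceptable for a throwaway debug session:

```python
# Hypothetical sketch: parse a shell-style env file such as
# /etc/sysconfig/clustercheck and emit the SET PASSWORD statement that
# was run by hand here. Does not execute anything against mysql.

def parse_env_file(text):
    """Parse KEY="value" lines into a dict, ignoring comments and blanks."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            key, _, value = line.partition('=')
            env[key] = value.strip('"')
    return env

def set_password_sql(env):
    """Build the statement to reset the clustercheck user's password."""
    user = env.get('MYSQL_USERNAME', 'clustercheck')
    password = env.get('MYSQL_PASSWORD', '')
    return "SET PASSWORD FOR '%s'@'localhost' = PASSWORD('%s');" % (user, password)

sample = 'MYSQL_USERNAME="clustercheck"\nMYSQL_PASSWORD="secret"\n'
print(set_password_sql(parse_env_file(sample)))
# -> SET PASSWORD FOR 'clustercheck'@'localhost' = PASSWORD('secret');
```

After applying the statement, a `pcs resource cleanup` lets pacemaker retry the failed galera actions.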

Comment 7 Sofer Athlan-Guyot 2016-11-02 16:55:46 UTC
After a debug session with Michele Baldessari and Damien Ciabrini, we
found that all the passwords have changed.

The password values given by "mistral environment-get overcloud" are
completely different from those in tripleo-overcloud-passwords.

It looks like the change
https://github.com/openstack/python-tripleoclient/commit/b51753e8e5e154f93c09c22f5ff6b75ecb699d49
is causing the issue.

What we would need is for the old password database to be imported into
mistral during the upgrade, either in the client code or via another script.

This is most certainly a valid upstream bug as well.
Requesting info from the original committers to check that we have the
story right.
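A rough way to confirm the divergence is to compare the two password stores key by key. This sketch assumes the legacy tripleo-overcloud-passwords file is shell-style KEY=value and that OVERCLOUD_FOO_PASSWORD maps to the mistral key FooPassword — that mapping is my guess at the convention, not the client's exact code, and the sample values are invented:

```python
# Hypothetical comparison of the legacy password file against the
# "passwords" section of `mistral environment-get overcloud`.

def legacy_key_to_mistral(key):
    """Map e.g. OVERCLOUD_ADMIN_PASSWORD -> AdminPassword (assumed mapping)."""
    parts = key.split('_')
    if parts and parts[0] == 'OVERCLOUD':
        parts = parts[1:]
    return ''.join(p.capitalize() for p in parts)

def parse_legacy_passwords(text):
    """Parse KEY=value lines into a dict keyed by the mistral-style name."""
    passwords = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            key, _, value = line.partition('=')
            passwords[legacy_key_to_mistral(key)] = value
    return passwords

def diverging_passwords(legacy, mistral_passwords):
    """Return keys present in both stores whose values differ."""
    return sorted(k for k in legacy
                  if k in mistral_passwords and legacy[k] != mistral_passwords[k])

legacy_text = (
    "OVERCLOUD_ADMIN_PASSWORD=abc123\n"
    "OVERCLOUD_SAHARA_PASSWORD=old-secret\n"
)
mistral = {"AdminPassword": "abc123", "SaharaPassword": "freshly-generated"}
print(diverging_passwords(parse_legacy_passwords(legacy_text), mistral))
# -> ['SaharaPassword']
```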

Comment 8 Sofer Athlan-Guyot 2016-11-02 17:01:09 UTC
Adding upstream review.

Comment 9 Sofer Athlan-Guyot 2016-11-02 17:02:16 UTC
Adding upstream bug.

Comment 10 Sofer Athlan-Guyot 2016-11-02 17:11:06 UTC
This has the same root cause as https://bugzilla.redhat.com/show_bug.cgi?id=1388930

Comment 11 Sofer Athlan-Guyot 2016-11-03 10:58:13 UTC
Note that as the error in comment 5 (https://bugzilla.redhat.com/show_bug.cgi?id=1389926#c5) popped up again, I've created https://bugzilla.redhat.com/show_bug.cgi?id=1391447 to track it.

Comment 12 Marios Andreou 2016-11-07 16:46:49 UTC
As noted in comment #10 and discussed during lifecycle scrum today, this has the same root cause as BZ 1388930 and so is fixed by https://review.openstack.org/#/c/394195/, which is merged to stable/newton. Adding that to the external trackers and removing the existing (abandoned) review. Moving to POST.

Comment 14 Omri Hochman 2016-11-15 18:48:13 UTC
Unable to reproduce with openstack-tripleo-common-5.3.0-6.el7ost.noarch.

Comment 16 errata-xmlrpc 2016-12-14 16:26:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html

