Bug 1180322

Summary: rhel-osp-installer: Controller report shows multiple errors for "Provider mysql is not functional on this host".
Product: Red Hat OpenStack Reporter: Omri Hochman <ohochman>
Component: rhel-osp-installer    Assignee: Jiri Stransky <jstransk>
Status: CLOSED ERRATA QA Contact: Omri Hochman <ohochman>
Severity: medium Docs Contact:
Priority: medium    
Version: 6.0 (Juno)    CC: aberezin, dmacpher, jshortt, jstransk, juwu, kevin.richards, kevin.x.wang, lyarwood, mburns, morazi, oblaut, ohochman, racedoro, rhos-maint, salmank, sasha, sclewis, sengork, sseago, vincent.y.chen, yeylon
Target Milestone: z1    Keywords: Reopened, UserExperience, ZStream
Target Release: Installer   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: rhel-osp-installer-0.5.5-3.el7ost, ruby193-rubygem-staypuft-0.5.19-1.el7ost    Doc Type: Bug Fix
Doc Text:
MariaDB was not yet installed when the initial Puppet run executed on Controller nodes, which caused a non-fatal error during that run. This fix installs MariaDB on Controllers during the kickstart process to prevent the error.
Story Points: ---
Clone Of: Environment:
Last Closed: 2015-03-05 18:18:57 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 743661, 1177026, 1183053    
Attachments:
Description                Flags
foreman_installer.log      none
controller_logs            none
Controller_messages_file   none

Description Omri Hochman 2015-01-08 20:30:56 UTC
Created attachment 977935 [details]
foreman_installer.log

rhel-osp-installer: Controller report shows multiple errors for "Provider mysql is not functional on this host".


Description: 
-------------
Attempted to run a Nova-network deployment with 3 controllers and 2 computes.
After the deployment finished, the controller reports showed multiple errors for a missing MySQL provider:

"Provider mysql is not functional on this host".

Checking the controllers shows that mariadb was installed:
[root@maca25400702877 ~]# rpm -qa | grep mariadb
mariadb-galera-common-5.5.40-3.el7ost.x86_64
mariadb-5.5.40-1.el7_0.x86_64
mariadb-galera-server-5.5.40-3.el7ost.x86_64
mariadb-libs-5.5.40-1.el7_0.x86_64
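
The Puppet error "Command mysql is missing" suggests the mysql client binary was absent at the time the provider was evaluated, even though the packages appear later. A quick way to confirm which package owns the client and whether it is on PATH (a sketch; run on the affected controller, and note that the virtual "mysql" provide is a packaging detail that may differ per release):

command -v mysql || echo "mysql client not on PATH"
rpm -qf /usr/bin/mysql          # package that owns the client binary, once it exists
rpm -q --whatprovides mysql     # installed package that provides the mysql capability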


Environment:
-------------
rhel-osp-installer-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.7-2.el7ost.noarch
puppet-3.6.2-2.el7.noarch
puppet-server-3.6.2-2.el7.noarch


Errors from the reports:
------------------------
err	Puppet	Could not find a suitable provider for mongodb_replset
err	Puppet	Could not find a suitable provider for mysql_database
err	Puppet	Could not find a suitable provider for mysql_user
err	Puppet	Could not prefetch keystone_endpoint provider 'keystone': File: /etc/keystone/keystone.conf does not contain a section DEFAULT with the admin_token specified. Keystone types will not work if keystone is not correctly configured
err	Puppet	Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[neutron@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[cinder@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[nova@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[glance@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[heat@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_user[keystone@%]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[keystone]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[cinder]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[heat]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[glance]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[nova]	Provider mysql is not functional on this host
err	/Stage[main]/Quickstack::Galera::Db/Mysql_database[neutron]	Provider mysql is not functional on this host


From foreman-installer.log (attached):
----------------------------------------
Notice: /File[/var/lib/puppet/lib/puppet/provider/rabbitmq_user/rabbitmqctl.rb]/ensure: defined content as '{md5}5bafb7579d8ac5e26ead4ccc0e50a625'
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default')
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[neutron]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[nova]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[glance]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[heat]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[cinder]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[keystone]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[keystone@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[heat@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[glance@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[nova@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[cinder@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[neutron@%]: Provider mysql is not functional on this host
Error: Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
Error: Could not prefetch keystone_endpoint provider 'keystone': File: /etc/keystone/keystone.conf does not contain a section DEFAULT with the admin_token specified. Keystone types will not work if keystone is not correctly configured
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for mysql_database
Notice: Finished catalog run in 1.67 seconds

Comment 2 Omri Hochman 2015-01-08 20:56:19 UTC
Regarding:
"Could not prefetch keystone_endpoint provider 'keystone': File: /etc/keystone/keystone.conf does not contain a section DEFAULT with the admin_token specified."

Looking at keystone.conf, 'admin_token' does exist under the DEFAULT section.
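
A quick way to double-check that from the shell (a sketch; it assumes the stock config path and that admin_token appears within the first lines of the DEFAULT section):

grep -A 30 '^\[DEFAULT\]' /etc/keystone/keystone.conf | grep '^admin_token'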

Comment 3 Omri Hochman 2015-01-08 21:02:30 UTC
Created attachment 977949 [details]
controller_logs

Comment 4 Jason Guiditta 2015-01-09 14:37:16 UTC
Omri, did this actually fail the deployment?  Are the systems still available for me to look at?

Comment 5 Mike Burns 2015-01-09 14:39:50 UTC
or sasha^^

Comment 6 Alexander Chuzhoy 2015-01-09 15:14:06 UTC
The deployment's State is "running".
Provided a system to investigate.

Comment 7 Omri Hochman 2015-01-09 15:21:14 UTC
(In reply to Jason Guiditta from comment #4)
> Omri, did this actually fail the deployment?  Are the systems still
> available for me to look at?

It didn't fail the deployment.

Comment 8 Mike Burns 2015-01-12 13:04:49 UTC
*** Bug 1180961 has been marked as a duplicate of this bug. ***

Comment 10 kevin 2015-01-13 01:42:27 UTC
But on my test bed it failed the deployment. Could someone help troubleshoot that? Is any more information needed?

Comment 11 Omri Hochman 2015-01-13 15:32:24 UTC
(In reply to kevin from comment #10)
> But on my test bed it failed the deployment. Could someone help troubleshoot
> that? Is any more information needed?

Currently under investigation. In some cases the deployment indeed fails: raising severity and setting the blocker flag.

Comment 12 Omri Hochman 2015-01-13 15:57:06 UTC
Not sure if related, but there is a failed action in pcs status:

[root@maca25400702876 ~]# pcs status
Cluster name: openstack
Last updated: Tue Jan 13 10:51:43 2015
Last change: Mon Jan 12 18:03:22 2015 via cibadmin on pcmk-maca25400702875
Stack: corosync
Current DC: pcmk-maca25400702875 (1) - partition with quorum
Version: 1.1.10-32.el7_0.1-368c726
3 Nodes configured
108 Resources configured


Online: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]

Full list of resources:

 ip-192.168.0.4 (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.3 (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 ip-192.168.0.29        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.2 (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.23        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 ip-192.168.0.24        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.25        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.36        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 Clone Set: memcached-clone [memcached]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: rabbitmq-server-clone [rabbitmq-server]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 ]
     Stopped: [ pcmk-maca25400702877 ]
 Clone Set: haproxy-clone [haproxy]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.13        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 Master/Slave Set: galera-master [galera]
     Masters: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.26        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.28        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.27        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.14        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 Clone Set: fs-varlibglanceimages-clone [fs-varlibglanceimages]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.16        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.15        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.35        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.33        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.34        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 ip-192.168.0.6 (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.5 (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.12        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started pcmk-maca25400702877
 ip-192.168.0.17        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.18        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 ip-192.168.0.19        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 ip-192.168.0.20        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702875
 ip-192.168.0.22        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702876
 ip-192.168.0.21        (ocf::heartbeat:IPaddr2):       Started pcmk-maca25400702877
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Resource Group: heat
     openstack-heat-engine      (systemd:openstack-heat-engine):        Started pcmk-maca25400702875
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: httpd-clone [httpd]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: mongod-clone [mongod]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: ceilometer-delay-clone [ceilometer-delay]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 openstack-ceilometer-central   (systemd:openstack-ceilometer-central): Started pcmk-maca25400702876
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ pcmk-maca25400702875 pcmk-maca25400702876 pcmk-maca25400702877 ]

Failed actions:
    rabbitmq-server_start_0 on pcmk-maca25400702877 'OCF_PENDING' (196): call=50, status=complete, last-rc-change='Mon Jan 12 17:34:56 2015', queued=2ms, exec=2001ms


PCSD Status:
  pcmk-maca25400702875: Online
  pcmk-maca25400702876: Online
  pcmk-maca25400702877: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Comment 13 Omri Hochman 2015-01-13 17:46:46 UTC
The workaround to finish the deployment is:

Run the following on the controller that had the problem (see the command sketch after this comment):
 (1) restart rabbitmq-server
 (2) pcs resource cleanup rabbitmq-server
 (3) puppet agent -tv
 (4) resume the deployment from the rhel-osp-installer GUI.


To find which host is the problematic one:
Check the GUI: deployment status -> dynflow-console -> click on the last error to see the error details.
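
A minimal sketch of the workaround above, run as root on the affected controller (service and resource names are taken from the pcs output in comment 12; adjust to your environment):

systemctl restart rabbitmq-server        # (1) restart the rabbitmq-server service
pcs resource cleanup rabbitmq-server     # (2) clear the failed start action recorded by pacemaker
puppet agent -tv                         # (3) re-run the puppet agent and watch for remaining errors

Then resume the deployment from the rhel-osp-installer GUI (step 4).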

Comment 15 kevin 2015-01-15 02:02:42 UTC
The redeployment completes with the same errors after we reinstalled osp-installer:
1. Reinstalled osp-installer.
2. Redeployed the controller node.
3. Deployment completed with the same errors; I posted the errors below.

Could you please share how I can troubleshoot this?
Please let me know if you need any more information.

# tail -f foreman-installer.log
Notice: /File[/var/lib/puppet/lib/puppet/provider/rabbitmq_user/rabbitmqctl.rb]/ensure: defined content as '{md5}5bafb7579d8ac5e26ead4ccc0e50a625'
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default')
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[neutron]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[nova]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[glance]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[heat]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[cinder]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_database[keystone]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[keystone@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[heat@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[glance@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[nova@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[cinder@%]: Provider mysql is not functional on this host
Error: /Stage[main]/Quickstack::Galera::Db/Mysql_user[neutron@%]: Provider mysql is not functional on this host
Error: Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
Error: Could not prefetch keystone_endpoint provider 'keystone': File: /etc/keystone/keystone.conf does not contain a section DEFAULT with the admin_token specified. Keystone types will not work if keystone is not correctly configured
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for vs_bridge
Error: Could not find a suitable provider for mysql_database
Error: Could not find a suitable provider for mongodb_replset
Notice: Finished catalog run in 5.42 seconds

[root@mac0050568e4efc keystone(openstack_admin)]# tail -f keystone.log
2015-01-14 20:45:03.342 30131 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 0
2015-01-14 20:45:17.314 1816 INFO eventlet.wsgi.server [-] 10.103.118.22 - - [14/Jan/2015 20:45:17] "GET /v2.0 HTTP/1.1" 200 554 0.003710
2015-01-14 20:45:17.341 1816 WARNING keystone.common.wsgi [-] Authorization failed. The request you have made requires authentication. from 10.103.118.22
2015-01-14 20:45:17.344 1816 INFO eventlet.wsgi.server [-] 10.103.118.22 - - [14/Jan/2015 20:45:17] "POST /v2.0/tokens HTTP/1.1" 401 315 0.023850
2015-01-14 20:46:03.148 870 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.token.backends.sql.Token is deprecated as of Juno in favor of keystone.token.persistence.backends.sql.Token and may be removed in Kilo.
2015-01-14 20:46:03.191 870 INFO keystone.token.persistence.backends.sql [-] Token expiration batch size: 1000
2015-01-14 20:46:03.199 870 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 0
2015-01-14 20:46:17.373 1816 INFO eventlet.wsgi.server [-] 10.103.118.22 - - [14/Jan/2015 20:46:17] "GET /v2.0 HTTP/1.1" 200 554 0.003787
2015-01-14 20:46:17.392 1816 WARNING keystone.common.wsgi [-] Authorization failed. The request you have made requires authentication. from 10.103.118.22
2015-01-14 20:46:17.394 1816 INFO eventlet.wsgi.server [-] 10.103.118.22 - - [14/Jan/2015 20:46:17] "POST /v2.0/tokens HTTP/1.1" 401 315 0.01566

Comment 16 kevin 2015-01-15 09:14:33 UTC
Below is what I tried and found today:
1. Deployed another node as a compute node. The process stopped on step 29.
2. From nova.log, I can see "Timed out waiting for nova-conductor".
3. Started all of the nova services on the controller node and resumed the deployment process. Still no luck.


1. Only one user-role, for the user admin, was added. That is not correct, and it may be why some services cannot be started by pcs.
[root@mac0050568e4efc ~(openstack_admin)]# keystone user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| ff268915c2b2432a811713b90627e2f1 | admin | 5f2a93bdbc074222b921f81d2ac96a36 | 3e76806f6c6f41cfa53ba766767d66f4 |
+----------------------------------+-------+----------------------------------+----------------------------------+
Normally, on a controller node, at least the glance, nova, neutron, and cinder users should have a role in the services tenant. Could someone tell me why the deployment did not add these user-roles and how to fix that?
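
For reference, a sketch of adding the usual role assignments manually with the keystone v2 CLI, assuming the service users and the 'services' tenant shown in the listings below and that the admin credentials are sourced (only do this if the deployment really will not add them itself):

for u in glance nova neutron cinder heat; do
    keystone user-role-add --user "$u" --role admin --tenant services
done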

[root@mac0050568e4efc ~(openstack_admin)]# keystone user-list
+----------------------------------+---------+---------+-------------------+
|                id                |   name  | enabled |       email       |
+----------------------------------+---------+---------+-------------------+
| 5f2a93bdbc074222b921f81d2ac96a36 |  admin  |   True  |  admin  |
| e82767a5f1ce4b6aae5d040c51671889 |  cinder |   True  |  cinder@localhost |
| ae494219accb431abe10864b596b6b9a |  glance |   True  |  glance@localhost |
| 52d98337b82049818f872cb72b0b02f4 |   heat  |   True  |   heat@localhost  |
| 3279944985eb458981adaec1b970b0c3 | neutron |   True  | neutron@localhost |
| 064c536c7c8e44bd87ef398975fb99f0 |   nova  |   True  |   nova@localhost  |
+----------------------------------+---------+---------+-------------------+
[root@mac0050568e4efc ~(openstack_admin)]# keystone tenant-list
+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| 3e76806f6c6f41cfa53ba766767d66f4 |  admin   |   True  |
| 66ad12e455754e479dd93426a733daa1 | services |   True  |
+----------------------------------+----------+---------+
[root@mac0050568e4efc ~(openstack_admin)]# keystone role-list
+----------------------------------+------------------+
|                id                |       name       |
+----------------------------------+------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab |     _member_     |
| ff268915c2b2432a811713b90627e2f1 |      admin       |
| 2cef85f8caff4a0aaf09d69a6342c090 | heat_stack_owner |
| e519ea0b7b0342be96ae5641870e878a | heat_stack_user  |
+----------------------------------+------------------+
[root@mac0050568e4efc ~(openstack_admin)]# keystone service-list
+----------------------------------+----------+---------------+---------------------------------+
|                id                |   name   |      type     |           description           |
+----------------------------------+----------+---------------+---------------------------------+
| 98e46bbcf7ad4cbf9d2aa32a60aa5678 |  cinder  |     volume    |          Cinder Service         |
| 85ff8d73a8464d19998d89605747daa5 | cinderv2 |    volumev2   |        Cinder Service v2        |
| 32a1ca6d9df048a4a2094c58cb8d5a0c |  glance  |     image     |     Openstack Image Service     |
| 8e7e3c8773f84f7f8fb5d68b917e74bb |   heat   | orchestration | Openstack Orchestration Service |
| 6ec729af714a41b0880b495afba8224f | keystone |    identity   |    OpenStack Identity Service   |
| 2db4c9e626a54754bcb55abb5f37eb3d | neutron  |    network    |    Neutron Networking Service   |
| 3adb4b50730b47dcb0c93fdcbca04dbd |   nova   |    compute    |    Openstack Compute Service    |
| 962f946e4a074a1db004afb07fbb18eb | nova_ec2 |      ec2      |           EC2 Service           |
| 6066f3b9a9dd4795a04a5415f33823f8 |  novav3  |   computev3   |   Openstack Compute Service v3  |
+----------------------------------+----------+---------------+---------------------------------+

2. Some services are not started by pcs, even though the database can be connected to successfully. I can, however, start the nova services manually.
[root@mac0050568e4efc ~]# pcs status
Cluster name: openstack
Last updated: Thu Jan 15 04:06:04 2015
Last change: Thu Jan 15 03:06:38 2015 via cibadmin on pcmk-mac0050568e4efc
Stack: corosync
Current DC: pcmk-mac0050568e4efc (1) - partition with quorum
Version: 1.1.10-32.el7_0.1-368c726
1 Nodes configured
66 Resources configured

Online: [ pcmk-mac0050568e4efc ]
Full list of resources:

 ip-10.103.117.214      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.211      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.244      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.213      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.241      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.229      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.228      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.230      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.248      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.242      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.243      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: memcached-clone [memcached]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: rabbitmq-server-clone [rabbitmq-server]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: haproxy-clone [haproxy]
     Started: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.218      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Master/Slave Set: galera-master [galera]
     Masters: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.239      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.236      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.240      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.221      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.220      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.219      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: fs-varlibglanceimages-clone [fs-varlibglanceimages]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Stopped: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.246      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.245      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.247      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Stopped: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Stopped: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.217      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.215      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.216      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ pcmk-mac0050568e4efc ]
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started pcmk-mac0050568e4efc
 Clone Set: neutron-server-clone [neutron-server]
     Stopped: [ pcmk-mac0050568e4efc ]
 ip-10.103.117.223      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.224      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.226      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.227      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.225      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 ip-10.103.117.222      (ocf::heartbeat:IPaddr2):       Started pcmk-mac0050568e4efc
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ pcmk-mac0050568e4efc ]
 Resource Group: neutron-agents
     neutron-openvswitch-agent  (systemd:neutron-openvswitch-agent):    Started pcmk-mac0050568e4efc
     neutron-dhcp-agent (systemd:neutron-dhcp-agent):   Started pcmk-mac0050568e4efc
     neutron-l3-agent   (systemd:neutron-l3-agent):     Started pcmk-mac0050568e4efc
     neutron-metadata-agent     (systemd:neutron-metadata-agent):       Started pcmk-mac0050568e4efc
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ pcmk-mac0050568e4efc ]
 Resource Group: heat
     openstack-heat-engine      (systemd:openstack-heat-engine):        Started pcmk-mac0050568e4efc
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: httpd-clone [httpd]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: mongod-clone [mongod]
     Started: [ pcmk-mac0050568e4efc ]
 openstack-ceilometer-central   (systemd:openstack-ceilometer-central): Started pcmk-mac0050568e4efc
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ pcmk-mac0050568e4efc ]
 Clone Set: ceilometer-delay-clone [ceilometer-delay]
     Started: [ pcmk-mac0050568e4efc ]

Failed actions:
    rabbitmq-server_start_0 on pcmk-mac0050568e4efc 'OCF_PENDING' (196): call=183, status=complete, last-rc-change='Thu Jan 15 03:45:42 2015', queued=10ms, exec=2090ms
    fs-varlibglanceimages_monitor_0 on pcmk-mac0050568e4efc 'not configured' (6): call=103, status=complete, last-rc-change='Thu Jan 15 03:45:07 2015', queued=2199ms, exec=0ms
    openstack-nova-consoleauth_start_0 on pcmk-mac0050568e4efc 'OCF_PENDING' (196): call=186, status=complete, last-rc-change='Thu Jan 15 03:45:44 2015', queued=3ms, exec=2334ms
    neutron-server_start_0 on pcmk-mac0050568e4efc 'OCF_PENDING' (196): call=356, status=complete, last-rc-change='Thu Jan 15 03:47:22 2015', queued=186ms, exec=8262ms

PCSD Status:
  pcmk-mac0050568e4efc: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Comment 17 Kevin Richards 2015-01-15 21:34:57 UTC
The compute node is still failing at step 29 of the deployment. The Dynflow console is logging the following errors:

Error:

Staypuft::Exception

ERF42-7423 [Staypuft::Exception]: No Puppet report found for host: 4

---
- /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/staypuft/host/assert_report_success.rb:28:in
  `assert_latest_report_success'
- /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/staypuft/host/assert_report_success.rb:17:in
  `run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:443:in
  `block (3 levels) in execute_run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in
  `call'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in
  `pass'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in
  `pass'
- /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/staypuft/middleware/as_current_user.rb:14:in
  `block in run'
- /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/staypuft/middleware/as_current_user.rb:30:in
  `as_current_user'
- /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/staypuft/middleware/as_current_user.rb:14:in
  `run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in
  `call'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in
  `pass'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in
  `pass'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action/progress.rb:30:in
  `with_progress_calculation'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action/progress.rb:16:in
  `run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in
  `call'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/world.rb:30:in
  `execute'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:442:in
  `block (2 levels) in execute_run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:441:in
  `catch'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:441:in
  `block in execute_run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in
  `call'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in
  `block in with_error_handling'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in
  `catch'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in
  `with_error_handling'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:436:in
  `execute_run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:230:in
  `execute'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:9:in
  `block (2 levels) in execute'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract.rb:152:in
  `call'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract.rb:152:in
  `with_meta_calculation'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:8:in
  `block in execute'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:22:in
  `open_action'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:7:in
  `execute'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/worker.rb:20:in
  `block in on_message'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:859:in
  `block in assigns'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in
  `tap'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in
  `assigns'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:138:in
  `match_value'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:116:in
  `block in match'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in
  `each'
- /opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in
  `match'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/worker.rb:17:in
  `on_message'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:82:in
  `on_envelope'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:72:in
  `receive'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in
  `block (2 levels) in run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in
  `loop'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in
  `block in run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in
  `catch'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in
  `run'
- /opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:13:in
  `block in initialize'

Comment 18 Alexander Chuzhoy 2015-01-16 00:14:06 UTC
The error was reproduced with an HA Nova deployment.
2 controllers out of 3 reported this error after the first puppet run.
Despite the error, subsequent puppet runs did not report it and the deployment completed successfully without intervention.

Comment 19 Mike Burns 2015-01-16 03:31:43 UTC
(In reply to Kevin Richards from comment #17)
> Compute node still failing at step 29 of deployment. The Dynoflow console is
> logging the following errors:
> 
> Error:
> 
> Staypuft::Exception
> 
> ERF42-7423 [Staypuft::Exception]: No Puppet report found for host: 4
> 
> ---
> -
> /opt/rh/ruby193/root/usr/share/gems/gems/staypuft-0.5.9/app/lib/actions/
> staypuft/host/assert_report_success.rb:28:in
>   `assert_latest_report_success'
> -


The dynflow console unfortunately tells us very little of what is actually breaking in your deployment.  The bit above could basically be translated as "We didn't get a successful puppet response for the host".  Can you go to your controller host(s) and collect /var/log/messages?  That will give us the results of the puppet run which failed and help us debug the issue.
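
A sketch of pulling just the puppet errors out of /var/log/messages before attaching the full file (the grep pattern is an assumption; widen it if your syslog tags differ):

grep -i 'puppet.*err' /var/log/messages | tail -n 200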

Comment 20 vincent_chen 2015-01-16 09:52:52 UTC
Created attachment 980800 [details]
Controller_messages_file

Controller_messages_file

Comment 21 Kevin Richards 2015-01-19 19:24:18 UTC
The controller logs have been attached and all information provided. We don't believe this is a race condition, considering we consistently hit the same problem on multiple retries and configurations.

Comment 22 Mike Burns 2015-01-21 14:49:10 UTC
The fix for this issue is to add a yum install of mariadb in the kickstart for controllers only.
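
A minimal sketch of what that kickstart change could look like (illustrative only; the actual change is made in the installer's controller kickstart template, and the package name is the RHEL 7 client package):

%post
# Install the MariaDB client up front so the first puppet run on controllers
# finds a working mysql provider instead of erroring out.
yum -y install mariadb
%end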

Comment 23 kevin 2015-01-22 01:51:04 UTC
We did not satisfy the requirement with this, so we are closing it.

Comment 24 Mike Burns 2015-01-22 14:26:36 UTC
Reopening -- this is still a valid issue that has been reproduced in other environments.

Comment 26 Brad P. Crochet 2015-01-23 14:35:06 UTC
PR available: https://github.com/theforeman/foreman-installer-staypuft/pull/131

Comment 28 Alexander Chuzhoy 2015-01-23 18:44:37 UTC
Verified: FailedQA
Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.10-2.el7ost.noarch
ruby193-rubygem-staypuft-0.5.14-1.el7ost.noarch
rhel-osp-installer-client-0.5.5-3.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.5-3.el7ost.noarch


The previous error was transformed into:

Could not prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Could not prefetch mysql_database provider 'mysql': Execution of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)


The error is not reported in the subsequent reports. The deployment does not pause and completes successfully.
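
For what it's worth, the new prefetch errors likely mean the mysql client is now present on the first run but the pacemaker-managed Galera server is not yet up at that point, so the socket does not exist yet. A quick check on the controller (a sketch; the resource name is taken from the pcs output earlier in this bug):

pcs status | grep -A 1 galera          # is the galera master resource started yet?
ls -l /var/lib/mysql/mysql.sock        # the socket only exists once mysqld is running
mysql -NBe 'show databases'            # should succeed once the server is up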

Comment 29 Mike Burns 2015-01-26 22:29:24 UTC
Jirka, can you look at this? It's coming from the rhel-osp-installer-client puppet run, which shouldn't be triggering an actual puppet run, afaik.

Comment 30 Jiri Stransky 2015-01-27 09:41:45 UTC
Yeah, it doesn't apply any changes to the system, but it seems like it still runs through the catalog even though it isn't making any actual changes. I'll investigate whether there's some way we can make it submit a report to Foreman to register the node with a success status.

Comment 33 Mike Burns 2015-02-03 17:04:56 UTC
Jirka,

Did you have a patch for this?

Comment 34 Mike Burns 2015-02-03 17:15:31 UTC
*** Bug 1183053 has been marked as a duplicate of this bug. ***

Comment 35 Jiri Stransky 2015-02-03 17:19:08 UTC
Yeah, I do:

https://github.com/theforeman/staypuft/pull/414

I'll amend it to address the pluginsync timeout issue

https://github.com/theforeman/staypuft/pull/414#issuecomment-71828263

Comment 36 Jiri Stransky 2015-02-05 10:20:35 UTC
Pull requests to both staypuft and installer, ready for review:

https://github.com/theforeman/staypuft/pull/414

https://github.com/theforeman/foreman-installer-staypuft/pull/133


The Staypuft pull request removes the necessity to compile the final catalog on the registration puppet run, which removes the errors.

Even after that, on slow and/or virtualized environments, the pluginsync puppet phase can time out, which causes a different error. The pluginsync is supposed to happen quickly and its timeout is not configurable. The only reasonable fix seems to be to run the client-installer twice, so the pluginsync can finish even on slower environments. In extreme cases running the installer twice might not be enough for the pluginsync to finish, but the solution should ideally be to give the puppet master more resources (mainly disk IO and networking) rather than to add even more client-installer runs.

Comment 38 Omri Hochman 2015-02-19 18:57:28 UTC
Verified with rhel-osp-installer-0.5.5-5.el7ost.noarch.

Comment 40 errata-xmlrpc 2015-03-05 18:18:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0641.html