Bug 1302535 - rhel-osp-director: Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `collect' for nil:NilClass Error: Could not prefetch keystone_role provider 'openstack': undefined method `collect' for nil:NilClass [NEEDINFO]
Status: CLOSED WORKSFORME
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director (Show other bugs)
Version: 7.0 (Kilo)
Hardware: Unspecified OS: Unspecified
Priority: high Severity: unspecified
Target Milestone: y3
Target Release: 7.0 (Kilo)
Assigned To: Emilien Macchi
QA Contact: Shai Revivo
Keywords: Reopened
Depends On:
Blocks:
 
Reported: 2016-01-28 00:47 EST by Alexander Chuzhoy
Modified: 2017-09-02 19:32 EDT (History)
CC: 14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-10 10:49:36 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
morazi: needinfo? (jmelvin)
morazi: needinfo? (rcernin)


Attachments
logs from one controller (3.49 MB, application/x-gzip)
2016-01-28 00:50 EST, Alexander Chuzhoy
Logs from a controller where puppet run failed. (2.83 MB, text/plain)
2016-02-08 22:18 EST, Qasim Sarfraz

Description Alexander Chuzhoy 2016-01-28 00:47:47 EST
rhel-osp-director: 7.3 HA overcloud deployment with SSL + IPv6 from SAT5 fails with:
Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `collect' for nil:NilClass
Error: Could not prefetch keystone_role provider 'openstack': undefined method `collect' for nil:NilClass


Environment:
instack-undercloud-2.1.2-37.el7ost.noarch      
openstack-tripleo-heat-templates-0.8.6-112.el7ost.noarch    


Steps to reproduce:
Attempted to deploy 7.3 overcloud from SAT5 with IPv6 and SSL on Bare Metal.

Result:
The deployment failed: "Deployment exited with non-zero status code: 6".

Heat shows:
    "deploy_stderr": "Device \"br_ex\" does not exist.\nDevice \"br_int\" does not exist.\nDevice \"br_nic2\" does not exist.\nDevice \"br_nic4\" does not exist.\nDevice |
\"ovs_system\" does not exist.\n\u001b[1;31mWarning: Scope(Class[Keystone]): Execution of db_sync does not depend on $enabled anymore. Please use sync_db instead.\u001b[0|
m\n\u001b[1;31mWarning: Scope(Class[Glance::Registry]): Execution of db_sync does not depend on $manage_service or $enabled anymore. Please use sync_db instead.\u001b[0m\|
n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been eval|
uated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::comput|
e has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class|
 ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncpro|
xy_path'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Concat::Setup]): concat::setup is deprecated as a public API of the conc|
at module and should no longer be directly included in the manifest.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6002]): The default incoming_chmod set to|
 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6002]): The default outgoin
g_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server[6001]): The d
efault incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::Storage::Server|
[6001]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning: Scope(Swift::S
torage::Server[6000]): The default incoming_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[1;31mWarning:
Scope(Swift::Storage::Server[6000]): The default outgoing_chmod set to 0644 may yield in error prone directories and will be changed in a later release.\u001b[0m\n\u001b[
1;31mError: Could not prefetch keystone_tenant provider 'openstack': undefined method `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: Could not prefetch keystone_
role provider 'openstack': undefined method `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: Could not prefetch keystone_user provider 'openstack': undefined metho
d `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Could not evaluate: undefined method `empty
?' for nil:NilClass\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Heat::Keystone::Domain/Exec[heat_domain_create]: Skipping because of failed dependencies\u001b[0m\n",
    "deploy_status_code": 6
  },


Expected result:
successful deployment.
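
For anyone retracing this, the deploy_stderr above can be pulled from the failed Heat deployment on the undercloud. A minimal sketch of the drill-down (the same commands appear in comment 12; the IDs are placeholders):

source ~/stackrc
heat stack-list --show-nested -f "status=FAILED"   # locate the failed nested stack(s)
heat resource-list <failed_nested_stack_id>        # find the FAILED OS::Heat::StructuredDeployment
heat deployment-show <deployment_id>               # dumps deploy_stdout / deploy_stderr as quoted above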
Comment 2 Alexander Chuzhoy 2016-01-28 00:50 EST
Created attachment 1119008 [details]
logs from one controller
Comment 4 James Slagle 2016-01-28 10:18:34 EST
Emilien, can you take this one and at least give an initial interpretation of the Puppet error message?
Comment 5 Emilien Macchi 2016-01-28 10:31:20 EST
Of course, James. It sounds to me like something with python-openstackclient, which could be the wrong version.

The puppet-keystone module has Puppet providers to manage some resources, like users, tenants, domains, services, endpoints, etc.
The providers just drive the python-openstackclient CLI.
So I'm curious to know the version of this package before and after the upgrade, to make sure there is nothing wrong in there.

I've seen cases where openstackclient was too old to support what we do in the providers... it might be related.
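
A quick sanity check along these lines on the failing controller might look like the following (the rc file path and package names are assumptions for a typical OSP 7 node, not taken from this report):

rpm -q python-openstackclient openstack-puppet-modules
# The puppet-keystone providers shell out to the openstack CLI, so confirm it answers at all:
source /root/openrc 2>/dev/null || source ~/overcloudrc
openstack project list
openstack role list
# If these hang, error out, or print nothing, prefetch has no output to parse,
# which is one way to end up with "undefined method `collect' for nil:NilClass".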
Comment 6 Alexander Chuzhoy 2016-01-28 10:56:03 EST
The issue is transient. Re-deployed exactly the same way and didn't reproduce the issue.
Comment 7 Alexander Chuzhoy 2016-01-28 12:31:50 EST
On controller I get: python-openstackclient-1.0.3-3.el7ost.noarch 
There was no upgrade - plain deployment.
Comment 8 Angus Thomas 2016-02-03 05:29:44 EST
Please reopen this if the problem recurs.
Comment 9 Qasim Sarfraz 2016-02-08 22:18 EST
Created attachment 1122335 [details]
Logs from a controller where puppet run failed.

I'm hitting the same issue; is there any workaround for this? I have attached /var/log/messages from one of the controllers.
Comment 10 David Hill 2016-03-02 18:14:53 EST
I'm hitting the same issue with a customer.  I'm investigating this right now.
Comment 11 Qasim Sarfraz 2016-03-03 05:07:21 EST
(In reply to David Hill from comment #10)
> I'm hitting the same issue with a customer.  I'm investigating this right
> now.

Hi David,

I was able to fix this on my setup. For me the root cause was a Keystone failure: Keystone wasn't working in the deployment due to a PNI connectivity issue. After fixing that I had a clean deployment. Might be helpful for you.
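
If Keystone availability is the suspect, a few quick checks on a controller before re-running the deploy may help confirm it (the VIP, port, and service names below are illustrative for an OSP 7 HA controller, not taken from this bug):

# Does the Keystone admin endpoint answer? Replace the VIP/port with your environment's values.
curl -gk https://<internal_api_vip>:35357/v2.0/
# Is the service running, whether standalone or under Pacemaker?
systemctl status openstack-keystone httpd 2>/dev/null | grep -E 'Loaded|Active'
pcs status | grep -i keystone
# Any recent Keystone errors?
tail -n 50 /var/log/keystone/keystone.log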
Comment 12 Jeremy 2016-03-11 17:46:54 EST
//notes

[stack@kl13322d ~(stackrc]$ heat stack-list --show-nested -f "status=FAILED"
+--------------------------------------+---------------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| id                                   | stack_name                                                                                                    | stack_status  | creation_time        | parent                               |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| 410cceca-ea61-433e-8a01-ef832e988b00 | overcloud                                                                                                     | UPDATE_FAILED | 2015-12-16T11:00:11Z | None                                 |
| 2817ad52-99b9-4fa4-a712-1d95f18bfeef | overcloud-CephStorage-vqifmxabyldx                                                                            | UPDATE_FAILED | 2015-12-16T11:00:22Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| 23dd8002-ec81-4cf7-9582-450b77dd8d64 | overcloud-CephStorage-vqifmxabyldx-1-ws4mnwkofuen                                                             | UPDATE_FAILED | 2015-12-16T11:00:25Z | 2817ad52-99b9-4fa4-a712-1d95f18bfeef |
| d50f21f2-f726-4317-836f-644d19ce89e1 | overcloud-CephStorage-vqifmxabyldx-2-v7vah5wu5s6m                                                             | UPDATE_FAILED | 2015-12-16T11:00:30Z | 2817ad52-99b9-4fa4-a712-1d95f18bfeef |
| d740c6c5-e348-41bb-909e-4512bd853f67 | overcloud-CephStorage-vqifmxabyldx-0-vl6fq747odnw                                                             | UPDATE_FAILED | 2015-12-16T11:00:33Z | 2817ad52-99b9-4fa4-a712-1d95f18bfeef |
| e8e889f1-3ae1-4fa0-9f58-76fae54f4c62 | overcloud-Compute-krp4c3xrzsk6                                                                                | UPDATE_FAILED | 2015-12-16T11:00:33Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| 16d17b31-6f8c-4a1b-b342-f10bede227e0 | overcloud-Controller-rxp5mkblqs5o                                                                             | UPDATE_FAILED | 2015-12-16T11:00:37Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| 36c4f712-9caa-428c-a3ae-aea582ab76de | overcloud-Compute-krp4c3xrzsk6-1-2pirrnaykk5x                                                                 | UPDATE_FAILED | 2015-12-16T11:00:37Z | e8e889f1-3ae1-4fa0-9f58-76fae54f4c62 |
| ae777abe-54b4-42d7-862e-4a95994926ea | overcloud-Controller-rxp5mkblqs5o-0-ajjrv4cusl4u                                                              | UPDATE_FAILED | 2015-12-16T11:00:46Z | 16d17b31-6f8c-4a1b-b342-f10bede227e0 |
| e1381c23-1f24-4a09-9c7b-beb9c6151e22 | overcloud-Controller-rxp5mkblqs5o-2-gvv3fma3fnhy                                                              | UPDATE_FAILED | 2015-12-16T11:00:51Z | 16d17b31-6f8c-4a1b-b342-f10bede227e0 |
| 7a9b8e0f-54b1-4fbb-8199-ff264ca91764 | overcloud-Controller-rxp5mkblqs5o-1-fhorswcbu4ml                                                              | UPDATE_FAILED | 2015-12-16T11:00:57Z | 16d17b31-6f8c-4a1b-b342-f10bede227e0 |
| be2aaf14-cab5-4a77-843d-e5b940a98b6e | overcloud-ControllerNodesPostDeployment-x7ivqldepski                                                          | UPDATE_FAILED | 2015-12-16T11:18:10Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| 41757349-0f18-4fd9-82a7-25d9f701cf81 | overcloud-ControllerNodesPostDeployment-x7ivqldepski-ControllerOvercloudServicesDeployment_Step7-mbgjvq6dwrxr | UPDATE_FAILED | 2015-12-16T11:28:05Z | be2aaf14-cab5-4a77-843d-e5b940a98b6e |
| b1cf186c-ccb0-454e-b981-af5c821ee203 | overcloud-Compute-krp4c3xrzsk6-3-hf6vd3meouvn                                                                 | UPDATE_FAILED | 2015-12-21T14:56:40Z | e8e889f1-3ae1-4fa0-9f58-76fae54f4c62 |


[stack@kl13322d ~(stackrc]$ heat resource-list 41757349-0f18-4fd9-82a7-25d9f701cf81
+---------------+--------------------------------------+--------------------------------+-----------------+----------------------+
| resource_name | physical_resource_id                 | resource_type                  | resource_status | updated_time         |
+---------------+--------------------------------------+--------------------------------+-----------------+----------------------+
| 0             | 004a1911-51a0-4781-89d4-3720da2e7af2 | OS::Heat::StructuredDeployment | UPDATE_COMPLETE | 2016-03-02T06:55:37Z |
| 2             | f6601bad-c57e-4e94-a466-4e904c363a0c | OS::Heat::StructuredDeployment | UPDATE_FAILED   | 2016-03-02T06:55:38Z |
| 1             | 8a1e3bcf-5e44-4825-aa6c-1df752d6ab4e | OS::Heat::StructuredDeployment | UPDATE_COMPLETE | 2016-03-02T06:55:41Z |
+---------------+--------------------------------------+--------------------------------+-----------------+----------------------+

[stack@kl13322d ~(stackrc]$ heat deployment-show f6601bad-c57e-4e94-a466-4e904c363a0c
{
  "status": "FAILED",
  "server_id": "8cf3c9c2-609b-44a7-ba40-8cb81b95898a",
  "config_id": "6c725e03-be42-47b0-af39-e0906d3b9889",
  "output_values": {
    "deploy_stdout": "\u001b[mNotice: Compiled catalog for ams-overcloud-controller-2.localdomain in environment production in 8.41 seconds\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_controller_pacemaker6]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Main/Exec[galera-ready]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}c07b9a377faea45b96b7d3bf8976004b' to '{md5}deaf33b08dcb3bf5ea32086c5cc936dd'\u001b[0m\n\u001b[mNotice: /File[/etc/ntp.conf]/seltype: seltype changed 'etc_t' to 'net_conf_t'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ntp::Service/Service[ntp]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/File[/tmp/ceph-mon-keyring-ams-overcloud-controller-2]/ensure: defined content as '{md5}3c0a39c0d0d1cc6271267a0c7dd2862f'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[ceph-mon-mkfs-ams-overcloud-controller-2]/returns: ++ ceph-mon --id ams-overcloud-controller-2 --show-config-value mon_data\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[ceph-mon-mkfs-ams-overcloud-controller-2]/returns: + mon_data=/var/lib/ceph/mon/ceph-ams-overcloud-controller-2\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[ceph-mon-mkfs-ams-overcloud-controller-2]/returns: + '[' '!' -d /var/lib/ceph/mon/ceph-ams-overcloud-controller-2 ']'\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[ceph-mon-mkfs-ams-overcloud-controller-2]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[rm-keyring-ams-overcloud-controller-2]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Exec[ceph-mon-ceph.client.admin.keyring-ams-overcloud-controller-2]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Ceph::Profile::Mon/Ceph::Mon[ams-overcloud-controller-2]/Service[ceph-mon-ams-overcloud-controller-2]/ensure: ensure changed 'stopped' to 'running'\u001b[0m\n\u001b[mNotice: Puppet::Provider::Openstack: project service is unavailable. Will retry for up to 9 seconds.\u001b[0m\n\u001b[mNotice: Puppet::Type::Keystone_tenant::ProviderOpenstack: project service is unavailable. Will retry for up to 9 seconds.\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: created\u001b[0m\n\u001b[mNotice: Puppet::Type::Keystone_tenant::ProviderOpenstack: project service is unavailable. Will retry for up to 9 seconds.\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[services]/ensure: created\u001b[0m\n\u001b[mNotice: Puppet::Provider::Openstack: role service is unavailable. Will retry for up to 10 seconds.\u001b[0m\n\u001b[mNotice: Puppet::Type::Keystone_role::ProviderOpenstack: role service is unavailable. Will retry for up to 10 seconds.\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: created\u001b[0m\n\u001b[mNotice: Puppet::Provider::Openstack: user service is unavailable. 
Will retry for up to 10 seconds.\u001b[0m\n\u001b[mNotice: Puppet::Type::Keystone_user::ProviderOpenstack: user service is unavailable. Will retry for up to 9 seconds.\u001b[0m\n\u001b[mNotice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]/ensure: created\u001b[0m\n\u001b[mNotice: Puppet::Provider::Openstack: user role service is unavailable. Will retry for up to 10 seconds.\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Keystone::Domain/Exec[heat_domain_create]: Dependency Keystone_user_role[admin@admin] has failures: true\u001b[0m\n\u001b[mNotice: /Stage[main]/Pacemaker::Corosync/Exec[enable-not-start-tripleo_cluster]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Pacemaker::Corosync/Exec[Set password for hacluster user on tripleo_cluster]/returns: executed successfully\u001b[0m\n\u001b[mNotice: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/returns: executed successfully\u001b[0m\n\u001b[mNotice: Pacemaker has reported quorum achieved\u001b[0m\n\u001b[mNotice: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]/message: defined 'message' as 'Pacemaker has reported quorum achieved'\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api/Service[heat-api]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: /Stage[main]/Heat::Api_cloudwatch/Service[heat-api-cloudwatch]: Triggered 'refresh' from 1 events\u001b[0m\n\u001b[mNotice: Finished catalog run in 110.12 seconds\u001b[0m\n",
    "deploy_stderr": "Device \"br_ex\" does not exist.\nDevice \"br_int\" does not exist.\nDevice \"br_tun\" does not exist.\nDevice \"ovs_system\" does not exist.\n\u001b[1;31mWarning: Scope(Class[Keystone]): Execution of db_sync does not depend on $enabled anymore. Please use sync_db instead.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Glance::Registry]): Execution of db_sync does not depend on $manage_service or $enabled anymore. Please use sync_db instead.\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_path'; class ::nova::compute has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Concat::Setup]): concat::setup is deprecated as a public API of the concat module and should no longer be directly included in the manifest.\u001b[0m\n\u001b[1;31mError: Could not prefetch keystone_tenant provider 'openstack': undefined method `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: Could not prefetch keystone_role provider 'openstack': undefined method `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: Could not prefetch keystone_user provider 'openstack': undefined method `collect' for nil:NilClass\u001b[0m\n\u001b[1;31mError: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Could not evaluate: undefined method `empty?' for nil:NilClass\u001b[0m\n\u001b[1;31mWarning: /Stage[main]/Heat::Keystone::Domain/Exec[heat_domain_create]: Skipping because of failed dependencies\u001b[0m\n",
    "deploy_status_code": 6
  },
  "creation_time": "2015-12-16T11:28:07Z",
  "updated_time": "2016-03-02T06:58:03Z",
  "input_values": {},
  "action": "UPDATE",
  "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6",
  "id": "f6601bad-c57e-4e94-a466-4e904c363a0c"


**** Did a yum update on the director, which updated about 52 packages. Re-ran the update deploy command; it still failed with the same error.

*** Went over to the controller: pcs cluster stop --all, wait; pcs cluster start --all, wait, wait, wait. Once everything was back up to normal I re-ran the update deploy command. This time it went along through many breakpoints (sequence sketched below).
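
For reference, the restart sequence described above is roughly the following (run from one controller; let the cluster settle between steps):

pcs cluster stop --all     # stop the cluster on all controllers
# wait until every node reports the cluster as stopped
pcs cluster start --all    # start it again on all controllers
pcs status                 # repeat until all resources show Started, then re-run the deploy/update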

*** Fewer failed resources:

[stack@kl13322d ~(stackrc]$  heat stack-list --show-nested -f "status=FAILED"
+--------------------------------------+---------------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| id                                   | stack_name                                                                                                    | stack_status  | creation_time        | parent                               |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+
| 410cceca-ea61-433e-8a01-ef832e988b00 | overcloud                                                                                                     | UPDATE_FAILED | 2015-12-16T11:00:11Z | None                                 |
| 16d17b31-6f8c-4a1b-b342-f10bede227e0 | overcloud-Controller-rxp5mkblqs5o                                                                             | UPDATE_FAILED | 2015-12-16T11:00:37Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| ae777abe-54b4-42d7-862e-4a95994926ea | overcloud-Controller-rxp5mkblqs5o-0-ajjrv4cusl4u                                                              | UPDATE_FAILED | 2015-12-16T11:00:46Z | 16d17b31-6f8c-4a1b-b342-f10bede227e0 |
| 7a9b8e0f-54b1-4fbb-8199-ff264ca91764 | overcloud-Controller-rxp5mkblqs5o-1-fhorswcbu4ml                                                              | UPDATE_FAILED | 2015-12-16T11:00:57Z | 16d17b31-6f8c-4a1b-b342-f10bede227e0 |
| be2aaf14-cab5-4a77-843d-e5b940a98b6e | overcloud-ControllerNodesPostDeployment-x7ivqldepski                                                          | UPDATE_FAILED | 2015-12-16T11:18:10Z | 410cceca-ea61-433e-8a01-ef832e988b00 |
| 41757349-0f18-4fd9-82a7-25d9f701cf81 | overcloud-ControllerNodesPostDeployment-x7ivqldepski-ControllerOvercloudServicesDeployment_Step7-mbgjvq6dwrxr | UPDATE_FAILED | 2015-12-16T11:28:05Z | be2aaf14-cab5-4a77-843d-e5b940a98b6e |
+--------------------------------------+---------------------------------------------------------------------------------------------------------------+---------------+----------------------+--------------------------------------+


** Still the same error on the controller resource.
Comment 13 Emilien Macchi 2016-03-16 09:42:59 EDT
It sounds to me like an issue with the Keystone service or the version of openstackclient, but I can't tell what the bug is from the logs alone. Is there a way to reproduce it quickly, or a way to get access to the platform?
Comment 14 Mike Orazi 2016-03-16 16:49:37 EDT
Do we have a reproducer for this?
Comment 18 jliberma@redhat.com 2017-09-02 19:32:17 EDT
For posterity, I came across this issue and was able to resolve it by rebuilding the CA trust database and redeploying.

https://access.redhat.com/solutions/1549003

I think the initial problem was caused by copying the overcloud-cacert.pem as root but running update-ca-trust extract as non-root.

# Back up any locally modified CA certificates, then restore the packaged trust store:
mkdir -p /root/cert.bak
rpm -Vv ca-certificates | awk '$1!="........." && $2!="d" {system("mv -v " $NF " /root/cert.bak")}'
ls /root/cert.bak/
yum check-update ca-certificates; (($?==100)) && yum update ca-certificates || yum reinstall ca-certificates
# Remove the stale overcloud CA anchor and confirm nothing unexpected is left behind:
rm /etc/pki/ca-trust/source/anchors/overcloud-cacert.pem
find /etc/pki/ca-trust/source{,/anchors} -maxdepth 1 -not -type d -exec ls -1 {} +
# Re-add the overcloud CA certificate and rebuild the trust database:
cp /home/stack/templates/overcloud-cacert.pem /etc/pki/ca-trust/source/anchors/
ls /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

As a troubleshooting step I redeployed without the TLS environment files, which worked. I then tried again with the unmodified TLS files, which failed with the same error. Then I rebuilt the CA store and redeployed (no modifications to the files), and it worked. My environment was OSP 8.
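
A couple of checks that may help confirm the rebuilt trust store before redeploying (the anchor path is the one used above; the endpoint and port are placeholders, not from this report):

# Show the subject and validity dates of the re-added anchor:
openssl x509 -in /etc/pki/ca-trust/source/anchors/overcloud-cacert.pem -noout -subject -dates
# Confirm it landed in the consolidated store after update-ca-trust extract:
trust list | grep -i -B2 -A4 overcloud
# A TLS client should now verify the overcloud endpoints without -k/--insecure, e.g.:
curl -v https://<overcloud-public-vip>:13000/ 2>&1 | grep -iE 'verify|subject'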
