Bug 1425189 - OSP10 doesn't deploy Heat (for the overcloud) properly
Summary: OSP10 doesn't deploy Heat (for the overcloud) properly
Keywords:
Status: CLOSED DUPLICATE of bug 1452677
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Thomas Hervé
QA Contact: Amit Ugol
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-20 18:26 UTC by Ed Balduf
Modified: 2023-09-14 03:53 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-30 16:25:07 UTC
Target Upstream Version:
Embargoed:




Links:
  Launchpad 1653985 (last updated 2017-03-01 16:43:49 UTC)
  Red Hat Bugzilla 1452677, high, CLOSED: overcloud heat metadata endpoints are incorrectly set to localhost (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1452677

Description Ed Balduf 2017-02-20 18:26:09 UTC
Description of problem: Heat in the overcloud does not work as deployed; software deployments against it never complete.


Version-Release number of selected component (if applicable): OSP10


How reproducible: yes


Steps to Reproduce:
1. Deploy OSP10 overcloud
2. Attempt to deploy a heat template into the overcloud
3. The SoftwareDeployment resources hang until they time out (see the sketch after these steps).
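
For reference, a minimal sketch of what steps 2 and 3 can look like (the template and stack names here are placeholders; any template that contains an OS::Heat::SoftwareDeployment resource shows the hang):

[overcloud]$ openstack stack create -t software-deploy-test.yaml test-stack   # hypothetical template with a SoftwareDeployment
[overcloud]$ openstack stack resource list test-stack                         # the deployment resource stays in CREATE_IN_PROGRESS
[overcloud]$ openstack software deployment list                               # the server never receives a completion signal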

Actual results: Heat deployments to the overcloud do not work; stacks hang waiting for the software deployments to complete.


Expected results: Successful deployment of a heat template to overcloud. 


Additional info: On the deployed VM, /etc/os-collect-config.conf points to localhost for metadata_url (it should point to the actual heat metadata server). Here is an example:

VM$ cat /etc/os-collect-config.conf
[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = cfn
collectors = local

[cfn]
metadata_url = http://127.0.0.1:8000/v1/
stack_name = ddc2-DDC_user_nodes-rvamg4t7y26y-0-g7z7ghinvuei
secret_access_key = 59209bf607864ce586e71ced8af8c1a4
access_key_id = 94adc0e6a016470492846611b74afd73
path = server.Metadata
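
Nothing on the guest listens on that localhost address, so the cfn collector can never fetch its metadata and the SoftwareDeployment never signals completion. A quick illustrative check from inside the VM (the curl call is only a sketch of the failure mode):

VM$ curl -sv http://127.0.0.1:8000/v1/    # expected to fail: no CFN-compatible metadata service runs on the guest itself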

The fix is to edit /etc/heat/heat.conf and explicitly set heat_metadata_server_url, heat_waitcondition_server_url, and heat_watch_server_url, as shown below. In this case, I commented out the configured values (127.0.0.1) and added my own endpoints.

------------------------------
# URL of the Heat metadata server. NOTE: Setting this is only needed if you
# require instances to use a different endpoint than in the keystone catalog
# (string value)
#heat_metadata_server_url = <None>
#heat_metadata_server_url = http://127.0.0.1:8000
heat_metadata_server_url = http://172.27.156.236:8000

# URL of the Heat waitcondition server. (string value)
#heat_waitcondition_server_url = <None>
#heat_waitcondition_server_url = http://127.0.0.1:8000/v1/waitcondition
heat_waitcondition_server_url = http://172.27.156.236:8000/v1/waitcondition

# URL of the Heat CloudWatch server. (string value)
#heat_watch_server_url =
#heat_watch_server_url = http://127.0.0.1:8003
heat_watch_server_url = http://172.27.156.236:8003
-------------------------------

Before making the change above, I followed the comments in heat.conf and checked the keystone catalog for the endpoints. They are correct, as shown below, so the bad localhost value is not coming from the catalog.

[overcloud]$ os endpoint show heat
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| adminurl     | http://172.27.104.107:8004/v1/%(tenant_id)s |
| enabled      | True                                        |
| id           | dd5640eaa00b47d29b6e8bf1d73a8270            |
| internalurl  | http://172.27.104.107:8004/v1/%(tenant_id)s |
| publicurl    | http://172.27.156.236:8004/v1/%(tenant_id)s |
| region       | regionOne                                   |
| service_id   | e9a0f10721944d68ae57ce3859156ab6            |
| service_name | heat                                        |
| service_type | orchestration                               |
+--------------+---------------------------------------------+
[overcloud]$ os endpoint show heat-cfn
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://172.27.104.107:8000/v1    |
| enabled      | True                             |
| id           | 240b54ed31574c26bbb9d3ab419e6f7d |
| internalurl  | http://172.27.104.107:8000/v1    |
| publicurl    | http://172.27.156.236:8000/v1    |
| region       | regionOne                        |
| service_id   | fbf47b2c65eb40d1861d41017d30919c |
| service_name | heat-cfn                         |
| service_type | cloudformation                   |
+--------------+----------------------------------+

Once fixed, newly deployed VMs have the following in /etc/os-collect-config.conf:
 
VM # cat /etc/os-collect-config.conf
[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = cfn
collectors = local

[cfn]
metadata_url = http://172.27.156.236:8000/v1/
stack_name = ddc-DDC_user_nodes-3iy5bofdrmq5-0-ju46n4q3r3a6
secret_access_key = ed3a6e61e6544db496cac97ad5619a04
access_key_id = 83888269ab2d4355bd9384397a1cc4fe
path = server.Metadata
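
As a side note for anyone hitting this before a proper fix lands: the same workaround can in principle be carried in the deployment itself rather than by hand-editing heat.conf on the controller. Below is only a sketch using TripleO ExtraConfig hieradata; the heat::engine parameter names are assumed from puppet-heat, the file name is arbitrary, and the address is this environment's public VIP.

$ cat > heat-endpoint-workaround.yaml <<'EOF'
parameter_defaults:
  ExtraConfig:
    heat::engine::heat_metadata_server_url: http://172.27.156.236:8000
    heat::engine::heat_waitcondition_server_url: http://172.27.156.236:8000/v1/waitcondition
    heat::engine::heat_watch_server_url: http://172.27.156.236:8003
EOF
# then include it in the overcloud deploy, for example:
#   openstack overcloud deploy --templates ... -e heat-endpoint-workaround.yaml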

Comment 1 Crag Wolfe 2017-02-28 16:18:04 UTC
Let's verify puppet had the wrong value to begin with when the controller was deployed. Please paste the output of the following from the overcloud controller (or upload the contents of that directory to this bz):

grep 'heat.*url' /etc/puppet/hieradata/*

Thanks.

Comment 2 Ed Balduf 2017-02-28 16:50:05 UTC
[root@overcloud-controller-0 hieradata]# grep 'heat.*url' /etc/puppet/hieradata/*
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::admin_url: http://172.27.104.107:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::internal_url: http://172.27.104.107:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::public_url: http://172.27.156.236:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::admin_url: http://172.27.104.107:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::internal_url: http://172.27.104.107:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::public_url: http://172.27.156.236:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::authtoken::auth_url: http://172.27.157.59:35357

Comment 3 Crag Wolfe 2017-03-01 16:43:50 UTC
Upstream newton backport request: https://review.openstack.org/439699

Comment 4 Zane Bitter 2017-03-06 21:14:00 UTC
Comments on the upstream backport suggest that this should have been fixed already by https://review.openstack.org/#/c/402401/, which is included in the package puppet-heat-9.4.1-2.el7ost.

Comment 5 Zane Bitter 2017-03-09 15:23:59 UTC
Can you confirm whether updating to puppet-heat-9.4.1-2.el7ost resolves the issue?
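
An illustrative way to check before redeploying (package name and version taken from comment 4; prompts are placeholders):

[controller]# rpm -q puppet-heat    # looking for puppet-heat-9.4.1-2.el7ost or newer
[controller]# grep -E '^heat_(metadata|waitcondition|watch)_server_url' /etc/heat/heat.conf    # what puppet actually rendered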

Comment 6 Ed Balduf 2017-03-09 22:29:30 UTC
Unfortunately, all of my hardware is tied up in an OpenStack deployment right now. I can't tear it down and redeploy because of all the customization I had to do after the deploy. When I get to a point where I can do that, I will.

Comment 9 Zane Bitter 2017-05-19 14:53:11 UTC
Bug 1452677 suggests that, in addition to puppet setting it to localhost in heat.conf, heat-dist.conf is also setting the same value.
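
An illustrative way to check both places on an affected controller (the heat-dist.conf path below assumes standard RDO/OSP packaging):

[controller]# grep -E 'heat_(metadata|waitcondition|watch)_server_url' /usr/share/heat/heat-dist.conf
[controller]# grep -E '^heat_(metadata|waitcondition|watch)_server_url' /etc/heat/heat.conf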

Comment 10 Zane Bitter 2017-05-30 16:25:07 UTC

*** This bug has been marked as a duplicate of bug 1452677 ***

Comment 11 Red Hat Bugzilla 2023-09-14 03:53:57 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.

