Description of problem:
Heat in the overcloud is not deployed in a working fashion.

Version-Release number of selected component (if applicable):
OSP10

How reproducible:
yes

Steps to Reproduce:
1. Deploy OSP10 overcloud
2. Attempt to deploy a heat template into the overcloud
3. SoftwareDeployments hang until timeout.

Actual results:
Heat deployments to the overcloud do not work. They hang waiting for software deployment to complete.

Expected results:
Successful deployment of a heat template to the overcloud.

Additional info:
On the deployed VM, /etc/os-collect-config.conf points to localhost for metadata_url (it should point to the actual heat metadata server). Here is an example:

VM$ cat /etc/os-collect-config.conf
[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = cfn
collectors = local

[cfn]
metadata_url = http://127.0.0.1:8000/v1/
stack_name = ddc2-DDC_user_nodes-rvamg4t7y26y-0-g7z7ghinvuei
secret_access_key = 59209bf607864ce586e71ced8af8c1a4
access_key_id = 94adc0e6a016470492846611b74afd73
path = server.Metadata

The fix is to change /etc/heat/heat.conf to actually set heat_metadata_server_url, heat_waitcondition_server_url, and heat_watch_server_url as shown below. In this case, I just commented out what was configured (127.0.0.1) and added my details.

------------------------------
# URL of the Heat metadata server. NOTE: Setting this is only needed if you
# require instances to use a different endpoint than in the keystone catalog
# (string value)
#heat_metadata_server_url = <None>
#heat_metadata_server_url = http://127.0.0.1:8000
heat_metadata_server_url = http://172.27.156.236:8000

# URL of the Heat waitcondition server. (string value)
#heat_waitcondition_server_url = <None>
#heat_waitcondition_server_url = http://127.0.0.1:8000/v1/waitcondition
heat_waitcondition_server_url = http://172.27.156.236:8000/v1/waitcondition

# URL of the Heat CloudWatch server. (string value)
#heat_watch_server_url = <None>
#heat_watch_server_url = http://127.0.0.1:8003
heat_watch_server_url = http://172.27.156.236:8003
-------------------------------

Before doing the above, I followed the comments and checked the keystone catalog for the endpoints. They are correct and shown below, so something else didn't work.

[overcloud]$ os endpoint show heat
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| adminurl     | http://172.27.104.107:8004/v1/%(tenant_id)s |
| enabled      | True                                        |
| id           | dd5640eaa00b47d29b6e8bf1d73a8270            |
| internalurl  | http://172.27.104.107:8004/v1/%(tenant_id)s |
| publicurl    | http://172.27.156.236:8004/v1/%(tenant_id)s |
| region       | regionOne                                   |
| service_id   | e9a0f10721944d68ae57ce3859156ab6            |
| service_name | heat                                        |
| service_type | orchestration                               |
+--------------+---------------------------------------------+

[overcloud]$ os endpoint show heat-cfn
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://172.27.104.107:8000/v1    |
| enabled      | True                             |
| id           | 240b54ed31574c26bbb9d3ab419e6f7d |
| internalurl  | http://172.27.104.107:8000/v1    |
| publicurl    | http://172.27.156.236:8000/v1    |
| region       | regionOne                        |
| service_id   | fbf47b2c65eb40d1861d41017d30919c |
| service_name | heat-cfn                         |
| service_type | cloudformation                   |
+--------------+----------------------------------+

Once fixed, new deployments of VMs have the following in /etc/os-collect-config.conf:

VM# cat /etc/os-collect-config.conf
[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = cfn
collectors = local

[cfn]
metadata_url = http://172.27.156.236:8000/v1/
stack_name = ddc-DDC_user_nodes-3iy5bofdrmq5-0-ju46n4q3r3a6
secret_access_key = ed3a6e61e6544db496cac97ad5619a04
access_key_id = 83888269ab2d4355bd9384397a1cc4fe
path = server.Metadata
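The manual edit above can also be scripted. Below is a minimal sketch, not the workaround actually used on this system: the function name is mine, the VIP 172.27.156.236 and ports 8000/8003 come from this report, and Python's configparser is used only for illustration.

```python
import configparser

def set_heat_endpoint_urls(conf_path, public_vip):
    """Point heat's server URLs at the public VIP instead of 127.0.0.1.

    Hypothetical helper mirroring the manual heat.conf edit above.
    Ports 8000 and 8003 are the heat-api-cfn and heat-api-cloudwatch
    defaults shown in this report.
    """
    # interpolation=None so literal '%' in values (e.g. %(tenant_id)s
    # elsewhere in OpenStack configs) is not treated as interpolation.
    conf = configparser.ConfigParser(interpolation=None)
    conf.read(conf_path)
    conf['DEFAULT']['heat_metadata_server_url'] = f'http://{public_vip}:8000'
    conf['DEFAULT']['heat_waitcondition_server_url'] = (
        f'http://{public_vip}:8000/v1/waitcondition')
    conf['DEFAULT']['heat_watch_server_url'] = f'http://{public_vip}:8003'
    with open(conf_path, 'w') as f:
        conf.write(f)

# Usage (hypothetical):
# set_heat_endpoint_urls('/etc/heat/heat.conf', '172.27.156.236')
```

Note that configparser drops comments when rewriting a file, so on a real controller crudini or a hand edit (as done above) is safer; this only illustrates the three values being set. Restarting the heat services afterwards would still be required.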
Let's verify puppet had the wrong value to begin with when the controller got deployed. Please paste the output of:

grep 'heat.*url' /etc/puppet/hieradata/*

from the overcloud controller (or upload the contents of that directory to this bz). Thanks.
[root@overcloud-controller-0 hieradata]# grep 'heat.*url' /etc/puppet/hieradata/*
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::admin_url: http://172.27.104.107:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::internal_url: http://172.27.104.107:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth::public_url: http://172.27.156.236:8004/v1/%(tenant_id)s
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::admin_url: http://172.27.104.107:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::internal_url: http://172.27.104.107:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::auth_cfn::public_url: http://172.27.156.236:8000/v1
/etc/puppet/hieradata/service_configs.yaml:heat::keystone::authtoken::auth_url: http://172.27.157.59:35357
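Notably, the grep output only shows heat::keystone::auth* catalog URLs; nothing in the hieradata matches the heat.conf server-URL options themselves, which is consistent with puppet leaving the 127.0.0.1 defaults in place. A small sketch of that check (the helper name is mine; the sample text is the yaml keys from the grep above):

```python
# Sample of the hieradata keys from the grep output above.
HIERADATA_KEYS = """\
heat::keystone::auth::admin_url: http://172.27.104.107:8004/v1/%(tenant_id)s
heat::keystone::auth::internal_url: http://172.27.104.107:8004/v1/%(tenant_id)s
heat::keystone::auth::public_url: http://172.27.156.236:8004/v1/%(tenant_id)s
heat::keystone::auth_cfn::admin_url: http://172.27.104.107:8000/v1
heat::keystone::auth_cfn::internal_url: http://172.27.104.107:8000/v1
heat::keystone::auth_cfn::public_url: http://172.27.156.236:8000/v1
heat::keystone::authtoken::auth_url: http://172.27.157.59:35357
"""

# The three heat.conf options that ended up pointing at 127.0.0.1.
ENGINE_URL_KEYS = (
    'heat_metadata_server_url',
    'heat_waitcondition_server_url',
    'heat_watch_server_url',
)

def engine_urls_present(text):
    """Return which heat.conf server-URL options appear in the hieradata text."""
    return [k for k in ENGINE_URL_KEYS if k in text]
```

Running `engine_urls_present(HIERADATA_KEYS)` on the sample comes back empty, i.e. the hieradata never fed those three options to puppet-heat.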
Upstream newton backport request: https://review.openstack.org/439699
Comments on the upstream backport suggest that this should have been fixed already by https://review.openstack.org/#/c/402401/ which is included in the package puppet-heat-9.4.1-2.el7ost.
Can you confirm whether updating to puppet-heat-9.4.1-2.el7ost resolves the issue?
Unfortunately all of my HW is in an OpenStack right now. I can't unstack and stack because of all the customization I had to do after the deploy. When I get to a point where I can do that, I will.
Bug 1452677 suggests that, in addition to puppet setting it to localhost in heat.conf, heat-dist.conf is also doing the same.
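That matters because of config-file precedence: assuming the RDO packaging convention where heat reads /usr/share/heat/heat-dist.conf before /etc/heat/heat.conf and later files win, a localhost value in heat-dist.conf only survives when heat.conf leaves the option unset. A hedged sketch of that "last file wins" behavior (the helper is mine; real oslo.config does this internally):

```python
import configparser

def effective_option(paths, option, section='DEFAULT'):
    """Return the winning value for an option across config files.

    Files are read in order and later files override earlier ones,
    mimicking oslo.config's handling of multiple --config-file args.
    """
    value = None
    for path in paths:
        conf = configparser.ConfigParser(interpolation=None)
        conf.read(path)
        if conf.has_option(section, option):
            value = conf.get(section, option)
    return value

# Usage (hypothetical paths):
# effective_option(['/usr/share/heat/heat-dist.conf', '/etc/heat/heat.conf'],
#                  'heat_metadata_server_url')
```

So if bug 1452677 is right that both files set the URL to localhost, fixing heat.conf alone still masks the heat-dist.conf value, but a heat.conf that never sets the option would fall through to the bad default.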
*** This bug has been marked as a duplicate of bug 1452677 ***
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days