Red Hat Bugzilla – Bug 1295740
Tripleo client is not creating the overcloudrc
Last modified: 2016-04-07 17:44:29 EDT
Description of problem:
Tripleo client is not creating the overcloudrc in the /home/stack directory.

Version-Release number of selected component (if applicable):

How reproducible:
Very. All deployments on director-8 are showing this.

Steps to Reproduce:
1. Do an overcloud deployment
2. Check whether /home/stack/overcloudrc is there

Actual results:
stderr: ls: /home/stack/overcloudrc: No such file or directory

Expected results:
/home/stack/overcloudrc should exist

Additional info:
@Adriano, please provide the logs, the poodle version, yum repolist output, etc.
Created attachment 1111921 [details] Overcloud deploy log

It was run on 8_director, RHEL 7.2, latest poodle as of 4 January 2016. It was set up with this command:

rhos-release -d -P 8-director; rhos-release -r 7.2 -t /home/stack/DIB -d -P 8-director;
On a closer look at the logs there are some errors. Although the overcloud deploy returns 0, it has probably failed:

2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Warning: The flavor selected for --control-flavor "baremetal" has no profile associated
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Recommendation: assign a profile with openstack flavor set --property "capabilities:profile"="PROFILE_NAME" baremetal
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Warning: The flavor selected for --compute-flavor "baremetal" has no profile associated
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Recommendation: assign a profile with openstack flavor set --property "capabilities:profile"="PROFILE_NAME" baremetal
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Warning: The flavor selected for --ceph-storage-flavor "baremetal" has no profile associated
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Recommendation: assign a profile with openstack flavor set --property "capabilities:profile"="PROFILE_NAME" baremetal
2016-01-04 15:04:26.048 14913 DEBUG tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Skipping verification of baremetal profiles because none will be deployed
2016-01-04 15:04:26.048 14913 DEBUG tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Skipping verification of baremetal profiles because none will be deployed
2016-01-04 15:04:26.048 14913 WARNING tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] There are 3 ironic nodes with no profile that will not be used: 10c9094a-9244-4905-bc64-528a126f1ffc, a851a10c-5f65-4761-a826-1f702c9a9d7b, 172f1ed1-5981-4570-ba67-cad9da53c472
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Configuration has 3 errors, fix them before proceeding. Ignoring these errors is likely to lead to a failed deploy.
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Configuration has 1 warnings, fix them before proceeding.
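For reference, a minimal sketch of the remedy the warnings themselves recommend, assuming the single "baremetal" flavor used in this deployment; the profile name mirrors the 'profile:baremetal' capability visible on the nodes later in this report:

# Hedged sketch: give the flavor a profile capability so the scheduler
# can match it to the tagged ironic nodes. Property values here are
# assumptions taken from the node capabilities shown in this bug.
source ~/stackrc
openstack flavor set \
    --property "capabilities:profile"="baremetal" \
    --property "capabilities:boot_option"="local" \
    baremetal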
2016-01-04 15:04:26.056 14913 DEBUG tripleoclient.plugin [ admin admin] Instantiating orchestration client: <class 'heatclient.v1.client.Client'>
2016-01-04 15:04:26.057 14913 DEBUG heatclient.common.http [ admin admin] curl -g -i -X GET -H 'X-Auth-Token: {SHA1}dc91579a82f7bd59a6ae64d807a02434c1e25289' -H 'Content-Type: application/json' -H 'X-Auth-Url: http://192.0.2.1:5000/v2.0' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://192.0.2.1:8004/v1/e97f8728d84f4ccc805b9065f9c34eab/stacks/overcloud
2016-01-04 15:04:26.057 14913 INFO requests.packages.urllib3.connectionpool [ admin admin] Starting new HTTP connection (1): 192.0.2.1
2016-01-04 15:04:26.227 14913 DEBUG requests.packages.urllib3.connectionpool [ admin admin] "GET /v1/e97f8728d84f4ccc805b9065f9c34eab/stacks/overcloud HTTP/1.1" 404 585
2016-01-04 15:04:26.227 14913 DEBUG heatclient.common.http [ admin admin] HTTP/1.1 404 Not Found
date: Mon, 04 Jan 2016 20:04:26 GMT
connection: keep-alive
content-type: application/json; charset=UTF-8
content-length: 585
x-openstack-request-id: req-ff77524d-ec19-4e7b-8fd7-8a551ec2b4c1
{"explanation": "The resource could not be found.", "code": 404, "error": {"message": "The Stack (overcloud) could not be found.", "traceback": "Traceback (most recent call last):\n\n File \"/usr/lib/python2.7/site-packages/heat/common/context.py\", line 305, in wrapped\n return func(self, ctx, *args, **kwargs)\n\n File \"/usr/lib/python2.7/site-packages/heat/engine/service.py\", line 430, in identify_stack\n raise exception.StackNotFound(stack_name=stack_name)\n\nStackNotFound: The Stack (overcloud) could not be found.\n", "type": "StackNotFound"}, "title": "Not Found"}
2016-01-04 15:04:26.227 14913 INFO tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] No stack found, will be doing a stack create
2016-01-04 15:04:26.227 14913 DEBUG tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Checking hypervisor stats
2016-01-04 15:04:26.228 14913 DEBUG keystoneclient.session [ admin admin] REQ: curl -g -i -X GET http://192.0.2.1:8774/v2/e97f8728d84f4ccc805b9065f9c34eab/os-hypervisors/statistics -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}dc91579a82f7bd59a6ae64d807a02434c1e25289"
2016-01-04 15:04:26.236 14913 DEBUG requests.packages.urllib3.connectionpool [ admin admin] "GET /v2/e97f8728d84f4ccc805b9065f9c34eab/os-hypervisors/statistics HTTP/1.1" 200 245
2016-01-04 15:04:26.237 14913 DEBUG keystoneclient.session [ admin admin] RESP: [200] date: Mon, 04 Jan 2016 20:04:26 GMT connection: keep-alive content-type: application/json content-length: 245 x-compute-request-id: req-a5fe1bc9-6d85-4cd7-aa8a-1c0d1119d396
RESP BODY: {"hypervisor_statistics": {"count": 0, "vcpus_used": 0, "local_gb_used": 0, "memory_mb": 0, "current_workload": 0, "vcpus": 0, "running_vms": 0, "free_disk_gb": 0, "disk_available_least": 0, "local_gb": 0, "free_ram_mb": 0, "memory_mb_used": 0}}
2016-01-04 15:04:26.237 14913 DEBUG openstackclient.shell [ admin admin] clean_up DeployOvercloud:
2016-01-04 15:04:26.237 14913 INFO openstackclient.shell [ admin admin] END return value: 0
These errors appear in successful runs, too. I think they are a red herring:

2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Warning: The flavor selected for --control-flavor "baremetal" has no profile associated
2016-01-04 15:04:26.048 14913 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Recommendation: assign a profile with openstack flavor set --property "capabilities:profile"="PROFILE_NAME" baremetal
After hunting a bit more, I can see that the error boils down to this:

[stack@instack ~]$ nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 0     |
| current_workload     | 0     |
| disk_available_least | 0     |
| free_disk_gb         | 0     |
| free_ram_mb          | 0     |
| local_gb             | 0     |
| local_gb_used        | 0     |
| memory_mb            | 0     |
| memory_mb_used       | 0     |
| running_vms          | 0     |
| vcpus                | 0     |
| vcpus_used           | 0     |
+----------------------+-------+

For some reason the hypervisor count is 0.
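For what it's worth, the count can be pulled out directly; a small sketch, assuming the admin credentials in ~/stackrc and the table layout above:

# Hedged one-liner: extract the hypervisor count that nova reports.
source ~/stackrc
nova hypervisor-stats | awk '$2 == "count" {print $4}'   # prints 0 here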
(In reply to Steve Linabery from comment #5)
> These errors appear in successful runs, too. I think they are a red herring:
>
> 2016-01-04 15:04:26.048 14913 ERROR
> tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin] Warning:
> The flavor selected for --control-flavor "baremetal" has no profile
> associated
> 2016-01-04 15:04:26.048 14913 ERROR
> tripleoclient.v1.overcloud_deploy.DeployOvercloud [ admin admin]
> Recommendation: assign a profile with openstack flavor set --property
> "capabilities:profile"="PROFILE_NAME" baremetal

You are right. I made some manual changes and that error went away, but the deploy still doesn't go through.
Just to be clear, the baremetal thing was a red herring. The relevant error is:

Instantiating orchestration client: <class 'heatclient.v1.client.Client'>
curl -g -i -X GET -H 'X-Auth-Token: {SHA1}b6acb344db9a4186acbd0d9e323ba0db5f07c783' -H 'Content-Type: application/json' -H 'X-Auth-Url: http://192.0.2.1:5000/v2.0' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://192.0.2.1:8004/v1/c257201e347843fbb6a7a1b3ba5195fd/stacks/overcloud
...snip...
No stack found, will be doing a stack create
Checking hypervisor stats
REQ: curl -g -i -X GET http://192.0.2.1:8774/v2/c257201e347843fbb6a7a1b3ba5195fd/os-hypervisors/statistics -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}b6acb344db9a4186acbd0d9e323ba0db5f07c783"
"GET /v2/c257201e347843fbb6a7a1b3ba5195fd/os-hypervisors/statistics HTTP/1.1" 200 245
RESP: [200] date: Thu, 07 Jan 2016 10:20:43 GMT connection: keep-alive content-type: application/json content-length: 245 x-compute-request-id: req-ff756d91-cde9-46a7-8453-0f036cd7fdda
RESP BODY: {"hypervisor_statistics": {"count": 0, "vcpus_used": 0, "local_gb_used": 0, "memory_mb": 0, "current_workload": 0, "vcpus": 0, "running_vms": 0, "free_disk_gb": 0, "disk_available_least": 0, "local_gb": 0, "free_ram_mb": 0, "memory_mb_used": 0}}
Deployment failed: Expected hypervisor stats not met
clean_up DeployOvercloud:
END return value: 0

So from what I followed, tripleoclient's utils.check_hypervisor_stats fails because nova hypervisor-stats gives a count of 0.
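If this were only a timing issue (nova's periodic resource-tracker task not yet having synced the ironic nodes), a wait loop like the hedged sketch below would eventually see a non-zero count; in this report it never does, which points at a real fault rather than a race:

# Hedged workaround sketch: poll until nova registers the ironic nodes
# as hypervisors. The 30 x 10s budget is an illustrative assumption.
source ~/stackrc
for i in $(seq 1 30); do
    count=$(nova hypervisor-stats | awk '$2 == "count" {print $4}')
    if [ "${count:-0}" -gt 0 ]; then
        echo "hypervisor count is $count"
        break
    fi
    sleep 10
done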
ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 5dac2173-452c-4d74-b8e5-ff0b7015af61 | None | None          | power off   | available          | False       |
| 1b918b20-19ad-4c7c-8ba0-19ab1d9ccd26 | None | None          | power off   | available          | False       |
| 2f7be663-e37f-422f-bbd3-2046dc7117e1 | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

ironic node-show 5dac2173-452c-4d74-b8e5-ff0b7015af61
+------------------------+--------------------------------------------------------------------+
| Property               | Value                                                              |
+------------------------+--------------------------------------------------------------------+
| target_power_state     | None                                                               |
| extra                  | {}                                                                 |
| last_error             | None                                                               |
| updated_at             | 2016-01-06T18:04:12+00:00                                          |
| maintenance_reason     | None                                                               |
| provision_state        | available                                                          |
| clean_step             | {}                                                                 |
| uuid                   | 5dac2173-452c-4d74-b8e5-ff0b7015af61                               |
| console_enabled        | False                                                              |
| target_provision_state | None                                                               |
| provision_updated_at   | 2016-01-06T18:04:11+00:00                                          |
| maintenance            | False                                                              |
| inspection_started_at  | None                                                               |
| inspection_finished_at | None                                                               |
| power_state            | power off                                                          |
| driver                 | pxe_ssh                                                            |
| reservation            | None                                                               |
| properties             | {u'memory_mb': u'6144', u'cpu_arch': u'x86_64',                    |
|                        |  u'local_gb': u'40', u'cpus': u'1',                                |
|                        |  u'capabilities': u'profile:baremetal,boot_option:local'}          |
| instance_uuid          | None                                                               |
| name                   | None                                                               |
| driver_info            | {u'ssh_username': u'root',                                         |
|                        |  u'deploy_kernel': u'd8605812-9e56-4e08-a0cf-198238a8cc56',        |
|                        |  u'deploy_ramdisk': u'cd726f50-5f03-4e86-833f-2be262d16bfb',       |
|                        |  u'ssh_key_contents': u'-----BEGIN RSA PRIVATE KEY-----            |
|                        |  (private key elided) -----END RSA PRIVATE KEY-----',              |
|                        |  u'ssh_virt_type': u'virsh', u'ssh_address': u'192.168.122.1'}     |
| created_at             | 2016-01-06T15:39:30+00:00                                          |
| driver_internal_info   | {}                                                                 |
| chassis_uuid           |                                                                    |
| instance_info          | {}                                                                 |
+------------------------+--------------------------------------------------------------------+

ironic node-show 1b918b20-19ad-4c7c-8ba0-19ab1d9ccd26 and ironic node-show 2f7be663-e37f-422f-bbd3-2046dc7117e1 return identical output apart from these fields:

+----------------------+--------------------------------------+--------------------------------------+
| Property             | 1b918b20-...                         | 2f7be663-...                         |
+----------------------+--------------------------------------+--------------------------------------+
| uuid                 | 1b918b20-19ad-4c7c-8ba0-19ab1d9ccd26 | 2f7be663-e37f-422f-bbd3-2046dc7117e1 |
| updated_at           | 2016-01-06T18:04:12+00:00            | 2016-01-06T18:04:14+00:00            |
| provision_updated_at | 2016-01-06T18:04:12+00:00            | 2016-01-06T18:04:14+00:00            |
| created_at           | 2016-01-06T15:39:30+00:00            | 2016-01-06T15:39:31+00:00            |
+----------------------+--------------------------------------+--------------------------------------+
Looking at the openstack-nova-compute logs, nova can't authenticate to keystone to access ironic:

oslo_service.periodic_task [req-5824d78b-d1fa-46b4-9a70-22048889c581 - - - - -] Error during ClusteredComputeManager.update_available_resource
oslo_service.periodic_task Traceback (most recent call last):
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 218, in run_periodic_tasks
oslo_service.periodic_task     task(self, context)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6193, in update_available_resource
oslo_service.periodic_task     nodenames = set(self.driver.get_available_nodes())
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 592, in get_available_nodes
oslo_service.periodic_task     self._refresh_cache()
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 575, in _refresh_cache
oslo_service.periodic_task     for node in self._get_node_list(detail=True, limit=0):
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 514, in _get_node_list
oslo_service.periodic_task     node_list = self.ironicclient.call("node.list", **kwargs)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/client_wrapper.py", line 125, in call
oslo_service.periodic_task     client = self._get_client()
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/ironic/client_wrapper.py", line 82, in _get_client
oslo_service.periodic_task     cli = ironic.client.get_client(CONF.ironic.api_version, **kwargs)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/ironicclient/client.py", line 86, in get_client
oslo_service.periodic_task     _ksclient = _get_ksclient(**ks_kwargs)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/ironicclient/client.py", line 35, in _get_ksclient
oslo_service.periodic_task     insecure=kwargs.get('insecure'))
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py", line 166, in __init__
oslo_service.periodic_task     self.authenticate()
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
oslo_service.periodic_task     return func(*args, **kwargs)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py", line 589, in authenticate
oslo_service.periodic_task     resp = self.get_raw_token_from_identity_service(**kwargs)
oslo_service.periodic_task   File "/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py", line 210, in get_raw_token_from_identity_service
oslo_service.periodic_task     _("Authorization Failed: %s") % e)
oslo_service.periodic_task AuthorizationFailure: Authorization Failed: Unable to establish connection to http://127.0.0.1:35357/v2.0/tokens

However, these credentials work from the CLI; not sure what caused it.
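Since the failure is "Unable to establish connection" rather than a 401, a quick hedged check of whether the admin identity endpoint answers on loopback at all might narrow it down (assumes curl and net-tools are installed on the undercloud):

# A healthy keystone v2.0 endpoint returns an HTTP status and a JSON
# version document; no response at all would mean nothing is listening.
curl -si http://127.0.0.1:35357/v2.0/ | head -n 1
# Confirm something is bound to the admin port:
sudo netstat -tlnp | grep 35357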
Oh, not exactly. I didn't copy the auth URL. If I do copy it, I get:

$ OS_USERNAME=ironic OS_PASSWORD=<snip> OS_TENANT_NAME=service OS_AUTH_URL=http://127.0.0.1:35357/v2.0/tokens ironic node-list
Unable to establish connection to http://127.0.0.1:35357/v2.0/tokens/tokens
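The doubled /tokens/tokens shows the client appends /tokens itself, so OS_AUTH_URL has to stop at the version root. A hedged sketch of the corrected invocation (password elided as in the original):

OS_USERNAME=ironic OS_PASSWORD=<snip> OS_TENANT_NAME=service \
OS_AUTH_URL=http://127.0.0.1:35357/v2.0 ironic node-list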
I think those were the wrong credentials. /home/stack/stackrc has the correct ones:

export NOVA_VERSION=1.1
export OS_PASSWORD=$(sudo hiera admin_password)
export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export COMPUTE_API_VERSION=1.1
export OS_NO_CACHE=True
export OS_CLOUDNAME=undercloud
export OS_IMAGE_API_VERSION=1

and ironic node-list works.
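A minimal check with those known-good credentials:

# Source the undercloud credentials, then list the nodes.
source /home/stack/stackrc
ironic node-list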
Comparing against a successful deployment, I see these differences.

Failed:

nova hypervisor-list
+----+---------------------+-------+--------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+--------+
+----+---------------------+-------+--------+

Successful:

nova hypervisor-list
+----+--------------------------------------+-------+---------+
| ID | Hypervisor hostname                  | State | Status  |
+----+--------------------------------------+-------+---------+
| 1  | 97959b47-1016-4729-a62e-bc07a787da96 | up    | enabled |
| 2  | 2303f54e-c91e-4a18-845e-f7d72175bca7 | up    | enabled |
| 3  | ce29048e-9efe-4138-8b5d-84073d54b35b | up    | enabled |
| 4  | 3d63800d-3262-40e9-9f74-8221c1fc1e0d | up    | enabled |
+----+--------------------------------------+-------+---------+

Failed:

ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 5dac2173-452c-4d74-b8e5-ff0b7015af61 | None | None          | power off   | available          | True        |
| 1b918b20-19ad-4c7c-8ba0-19ab1d9ccd26 | None | None          | power off   | available          | False       |
| 2f7be663-e37f-422f-bbd3-2046dc7117e1 | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

Successful:

ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provision State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| 97959b47-1016-4729-a62e-bc07a787da96 | None | 90bd8774-45af-42aa-8801-decbc68b4632 | power on    | active          | False       |
| 3d63800d-3262-40e9-9f74-8221c1fc1e0d | None | 442d7d09-730b-4c62-bb71-7e72e8002393 | power on    | active          | False       |
| ce29048e-9efe-4138-8b5d-84073d54b35b | None | fb9880e9-dd42-42e7-99a7-74d8c81468fe | power on    | active          | False       |
| 2303f54e-c91e-4a18-845e-f7d72175bca7 | None | 317fc9ab-7036-4757-b30a-e8a6e33ebcd2 | power on    | active          | False       |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
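One detail in the failed listing: node 5dac2173-... is now in maintenance (Maintenance = True). To the best of my knowledge nova's ironic driver does not expose maintenance nodes as usable resources, so a hedged step worth trying before redeploying:

# Clear the maintenance flag on the affected node (UUID taken from the
# failed listing above), then re-check the hypervisor count.
source ~/stackrc
ironic node-set-maintenance 5dac2173-452c-4d74-b8e5-ff0b7015af61 off
nova hypervisor-stats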
Yeah, [ironic]os_admin_url may be wrong in nova.conf then. Could you check its value on a successful deployment? Will my command line above work on it, by chance?
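A hedged sketch for comparing the [ironic] auth settings in nova.conf between a failed and a successful undercloud; the option names queried here (admin_url, api_endpoint) are assumptions for this release, so adjust to whatever the section actually contains:

# crudini reads a single option from an INI section.
sudo crudini --get /etc/nova/nova.conf ironic admin_url
sudo crudini --get /etc/nova/nova.conf ironic api_endpoint
# Or just dump the first lines of the whole section for a diff:
sudo grep -A 10 '^\[ironic\]' /etc/nova/nova.conf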
This patch looks related: https://review.openstack.org/#/c/264492/
Any news on that? We have a gate for OSP8 that can gate this change downstream.
Bug reproduced yesterday.

openstack-tripleo-heat-templates-0.8.7-12.el7ost.noarch
openstack-tripleo-puppet-elements-0.0.2-1.el7ost.noarch
openstack-tripleo-common-0.1.1-1.el7ost.noarch
openstack-tripleo-0.0.7-1.el7ost.noarch
python-tripleoclient-0.1.1-2.el7ost.noarch
openstack-tripleo-heat-templates-kilo-0.8.7-12.el7ost.noarch
openstack-tripleo-image-elements-0.9.7-2.el7ost.noarch

[stack@instack ~]$ rhos-release -v
1.0.35

The overcloud deployment was attempted (using the 8.0 puddle) on 25 Feb 2016 at 00:02. Network isolation is enabled, IPv4, no SSL on the overcloud.

Deployment command:

openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --ntp-server 10.5.26.10 --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e network-environment.yaml
Failed in instack-undercloud-2.2.2-2.el7ost.noarch.
We hit the same issue in OSP7.

[stack@director ~]$ time openstack overcloud deploy --templates templates/openstack-tripleo-heat-templates/ -e templates/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e templates/network-environment.yaml --control-flavor control --compute-flavor compute --neutron-tunnel-types vxlan --neutron-network-type vxlan --ntp-server 118.143.17.82
Deployment failed: Expected hypervisor stats not met

real	0m1.572s
user	0m0.458s
sys	0m0.134s

[stack@director ~]$ rpm -qa | grep instack
instack-0.0.7-2.el7ost.noarch
instack-undercloud-2.1.2-39.el7ost.noarch
Not sure how I can help here.
The error that you listed looks unrelated to this bug. Can you try again? I think you might need a new bug if the error you had continues to show up.
Tried to reproduce this several times today on a virt setup, using the current 8 puddle and various combinations of HA/network settings/compute/ceph.

[stack@instack ~]$ rpm -qa | grep tripleo
openstack-tripleo-0.0.7-1.el7ost.noarch
openstack-tripleo-heat-templates-kilo-0.8.9-1.el7ost.noarch
openstack-tripleo-image-elements-0.9.9-1.el7ost.noarch
openstack-tripleo-heat-templates-0.8.9-1.el7ost.noarch
openstack-tripleo-common-0.2.0-1.el7ost.noarch
python-tripleoclient-0.1.1-4.el7ost.noarch
openstack-tripleo-puppet-elements-0.0.3-1.el7ost.noarch
[stack@instack ~]$ rpm -qa | grep instack
instack-0.0.8-2.el7ost.noarch
instack-undercloud-2.2.4-1.el7ost.noarch
[stack@instack ~]$ rhos-release -v
1.0.35

The issue did not reproduce. Setting to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-0604.html