Description of problem:
-----------------------
After a fast-forward (FFWD) upgrade of a RHOS-10 composable deployment, spawning a VM failed:

fault:
  code: 500
  created: '2019-01-22T23:01:37Z'
  details: |
      File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1145, in schedule_and_build_instances
        instance_uuids, return_alternates=True)
      File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 742, in _schedule_instances
        return_alternates=return_alternates)
      File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 787, in wrapped
        return func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations
        instance_uuids, return_objects, return_alternates)
      File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
        return getattr(self.instance, __name)(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
        instance_uuids, return_objects, return_alternates)
      File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations
        return cctxt.call(ctxt, 'select_destinations', **msg_args)
      File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call
        retry=self.retry)
      File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send
        timeout=timeout, retry=retry)
      File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send
        retry=retry)
      File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 550, in _send
        raise result
  message: 'No valid host was found. '

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
openstack-nova-common-17.0.7-5.el7ost.noarch
openstack-nova-novncproxy-17.0.7-5.el7ost.noarch
openstack-nova-console-17.0.7-5.el7ost.noarch
puppet-nova-12.4.0-14.el7ost.noarch
openstack-nova-migration-17.0.7-5.el7ost.noarch
openstack-nova-compute-17.0.7-5.el7ost.noarch
openstack-nova-scheduler-17.0.7-5.el7ost.noarch
python2-novaclient-10.1.0-1.el7ost.noarch
openstack-nova-conductor-17.0.7-5.el7ost.noarch
python-nova-17.0.7-5.el7ost.noarch
openstack-nova-api-17.0.7-5.el7ost.noarch

Containers with tag 2019-01-21.1

Steps to Reproduce:
-------------------
1. Perform an FFWD upgrade of RHOS-10 (3 controllers + 3 serviceapi + 2 computes + 2 networkers + 3 ceph).
2. Try to spawn a VM after the FFWD process.

Actual results:
---------------
Failed to spawn a VM.

Expected results:
-----------------
The VM is spawned after the FFWD upgrade.
On the compute node, nova-compute.log shows the following error:

2019-01-23 07:24:54.876 1 DEBUG oslo_concurrency.lockutils [req-d10628cc-ac07-4d45-b109-b27e46a58c26 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2019-01-23 07:24:55.033 1 DEBUG oslo_concurrency.lockutils [req-d10628cc-ac07-4d45-b109-b27e46a58c26 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.157s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager [req-d10628cc-ac07-4d45-b109-b27e46a58c26 - - - - -] Error updating resources for node compute-0.localdomain.: ResourceProviderCreationFailed: Failed to create resource provider compute-0.localdomain
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager Traceback (most recent call last):
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7379, in update_available_resource_for_node
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 689, in update_available_resource
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 713, in _update_available_resource
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 562, in _init_compute_node
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     self._update(context, cn)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 887, in _update
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     inv_data,
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 68, in set_inventory_for_provider
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid,
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 1104, in set_inventory_for_provider
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 673, in _ensure_resource_provider
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager     name=name or uuid)
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager ResourceProviderCreationFailed: Failed to create resource provider compute-0.localdomain
2019-01-23 07:24:55.034 1 ERROR nova.compute.manager
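The error above names the compute node whose resource provider could not be created. When triaging a deployment with many computes, a small grep over nova-compute.log can list every affected node; this is an illustrative sketch, not part of the original report (the helper name and the containerized log path are assumptions):

```shell
# failed_providers LOGFILE
# Print the unique resource-provider names that nova-compute failed to
# create, by scanning for the ResourceProviderCreationFailed message.
failed_providers() {
    grep -o 'Failed to create resource provider [^ ]*' "$1" \
        | awk '{print $NF}' \
        | sort -u
}

# Example (log path is the usual containerized OSP location, assumed here):
# failed_providers /var/log/containers/nova/nova-compute.log
```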
The issue is that during the upgrade the placement user, service, and endpoints were not registered in Keystone. After adding the user, assigning the role, and creating the service and the endpoints:

openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "OpenStack Placement Service" placement
openstack endpoint create --region regionOne placement admin http://172.17.1.13:8778/placement
openstack endpoint create --region regionOne placement public https://10.0.0.101:13778/placement
openstack endpoint create --region regionOne placement internal http://172.17.1.13:8778/placement

and restarting the compute service on the compute nodes so the resource provider gets registered in placement:

docker restart nova_compute

an instance could be created successfully:

openstack server create --flavor workload_flavor_1 --image workload_image_1 --nic net-id=200a3178-ce7a-4459-b554-82791d3a1f77 test

+--------------------------------------+---------------------+--------+------------+-------------+-------------------------------------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks                                        |
+--------------------------------------+---------------------+--------+------------+-------------+-------------------------------------------------+
| de06bc20-f61e-4927-8fe2-6009d20ad524 | test                | ACTIVE | -          | Running     | workload_internal_net_1=192.168.0.21            |
| cd85ffe5-42f7-4f72-9257-68da3732e19b | workload_instance_0 | ACTIVE | -          | Running     | workload_internal_net_0=192.168.0.6, 10.0.0.215 |
| 3c74b26b-b591-485b-b887-6b03fc2bf120 | workload_instance_1 | ERROR  | -          | NOSTATE     |                                                 |
+--------------------------------------+---------------------+--------+------------+-------------+-------------------------------------------------+
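The three endpoint-create calls above follow one pattern: admin and internal share the internal VIP on port 8778, while public uses the TLS-terminated VIP on 13778. As a sketch only (the helper name is mine, and it echoes the commands rather than executing them), they can be generated from the region and VIP values used in this report:

```shell
# placement_endpoint_cmds REGION INTERNAL_VIP PUBLIC_VIP
# Emit the three "openstack endpoint create" commands for the placement
# service, following the port/scheme layout seen in this deployment.
placement_endpoint_cmds() {
    region=$1; internal_vip=$2; public_vip=$3
    echo "openstack endpoint create --region $region placement admin http://$internal_vip:8778/placement"
    echo "openstack endpoint create --region $region placement internal http://$internal_vip:8778/placement"
    echo "openstack endpoint create --region $region placement public https://$public_vip:13778/placement"
}

# Example: placement_endpoint_cmds regionOne 172.17.1.13 10.0.0.101
```

Afterwards, `openstack endpoint list --service placement` is a quick way to confirm all three interfaces are registered before restarting the computes.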
This is the same issue with docker_puppet_tasks.

*** This bug has been marked as a duplicate of bug 1626140 ***