Bug 1661804 - 'nova stop' commands do not work in upgraded environment (rhos13-rhos14).
Summary: 'nova stop' commands do not work in upgraded environment (rhos13-rhos14).
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z1
Target Release: 14.0 (Rocky)
Assignee: Martin Schuppert
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On: 1662575 1663903 1671861
Blocks: 1664710 1665032 1684663
 
Reported: 2018-12-23 13:29 UTC by Mike Abrams
Modified: 2023-03-21 19:12 UTC (History)
22 users

Fixed In Version: openstack-nova-18.0.3-0.20181011032840.d1243fe.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1664710 1665032
Environment:
Last Closed: 2019-03-18 13:02:12 UTC
Target Upstream Version:
Embargoed:


Attachments
sosreport (3.16 KB, application/x-xz)
2018-12-23 13:35 UTC, Mike Abrams


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1798172 0 None None None 2019-01-23 16:46:00 UTC
OpenStack gerrit 611337 0 None MERGED Ignore uuid if already set in ComputeNode.update_from_virt_driver 2020-10-27 21:41:43 UTC
OpenStack gerrit 629451 0 None MERGED Fix updating nodes with removed or broken drivers 2020-10-27 21:41:57 UTC
Red Hat Issue Tracker OSP-23416 0 None None None 2023-03-21 19:12:44 UTC
Red Hat Product Errata RHBA-2019:0592 0 None None None 2019-03-18 13:02:21 UTC
Storyboard 2004741 0 None None None 2019-01-09 12:12:56 UTC

Comment 2 Mike Abrams 2018-12-23 13:35:36 UTC
Created attachment 1516363 [details]
sosreport

Comment 9 Iury Gregory Melo Ferreira 2019-01-09 12:45:10 UTC
*** Bug 1652998 has been marked as a duplicate of this bug. ***

Comment 17 Alexander Chuzhoy 2019-01-21 17:52:35 UTC
Environment:
openstack-ironic-common-11.1.1-0.20181012152843.el7ost.noarch
openstack-ironic-conductor-11.1.1-0.20181012152843.el7ost.noarch
openstack-ironic-staging-drivers-0.10.1-0.20180820161038.39c4e93.el7ost.noarch

Switching the driver works:

(undercloud) [stack@undercloud-0 ~]$ openstack baremetal node show controller-0 -f value -c driver
pxe_ipmitool
(undercloud) [stack@undercloud-0 ~]$ openstack baremetal node set --driver ipmi controller-0
(undercloud) [stack@undercloud-0 ~]$ openstack baremetal node show controller-0 -f value -c driver
ipmi
(undercloud) [stack@undercloud-0 ~]$ 


Actually stopping a server, however, does not:

(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+
| ID                                   | Name         | Status | Networks               | Image                           | Flavor     |
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+
| b0a33693-1784-4b95-b97d-47797399b127 | ceph-1       | ACTIVE | ctlplane=192.168.24.13 | overcloud-full_20190119T154245Z | ceph       |
| 52c75240-799d-44d8-954e-1c6bdf6b5460 | controller-1 | ACTIVE | ctlplane=192.168.24.6  | overcloud-full_20190119T154245Z | controller |
| ffbe06ef-fa6d-491c-863c-fbfe0916dc25 | controller-0 | ACTIVE | ctlplane=192.168.24.9  | overcloud-full_20190119T154245Z | controller |
| 7c3fb5c7-ef7d-4ac8-bd08-d4a0e0ecefaa | controller-2 | ACTIVE | ctlplane=192.168.24.8  | overcloud-full_20190119T154245Z | controller |
| 9f8fa950-0f26-46f2-b5e4-a2ec1a9208bc | ceph-0       | ACTIVE | ctlplane=192.168.24.15 | overcloud-full_20190119T154245Z | ceph       |
| 89d31d20-851c-4481-8cd7-ce66c5bfb5e1 | ceph-2       | ACTIVE | ctlplane=192.168.24.21 | overcloud-full_20190119T154245Z | ceph       |
| 1156cb19-2cd9-4665-a7fb-398d0aed900d | compute-0    | ACTIVE | ctlplane=192.168.24.17 | overcloud-full_20190119T154245Z | compute    |
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+


(undercloud) [stack@undercloud-0 ~]$ nova stop controller-0

Request to stop server controller-0 has been accepted.

Despite being accepted, the servers are still active:
(undercloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+
| ID                                   | Name         | Status | Task State   | Power State | Networks               |
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+
| 9f8fa950-0f26-46f2-b5e4-a2ec1a9208bc | ceph-0       | ACTIVE | -            | Running     | ctlplane=192.168.24.15 |
| b0a33693-1784-4b95-b97d-47797399b127 | ceph-1       | ACTIVE | powering-off | Running     | ctlplane=192.168.24.13 |
| 89d31d20-851c-4481-8cd7-ce66c5bfb5e1 | ceph-2       | ACTIVE | -            | Running     | ctlplane=192.168.24.21 |
| 1156cb19-2cd9-4665-a7fb-398d0aed900d | compute-0    | ACTIVE | powering-off | Running     | ctlplane=192.168.24.17 |
| ffbe06ef-fa6d-491c-863c-fbfe0916dc25 | controller-0 | ACTIVE | powering-off | Running     | ctlplane=192.168.24.9  |
| 52c75240-799d-44d8-954e-1c6bdf6b5460 | controller-1 | ACTIVE | -            | Running     | ctlplane=192.168.24.6  |
| 7c3fb5c7-ef7d-4ac8-bd08-d4a0e0ecefaa | controller-2 | ACTIVE | -            | Running     | ctlplane=192.168.24.8  |
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+
(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+
| ID                                   | Name         | Status | Networks               | Image                           | Flavor     |
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+
| b0a33693-1784-4b95-b97d-47797399b127 | ceph-1       | ACTIVE | ctlplane=192.168.24.13 | overcloud-full_20190119T154245Z | ceph       |
| 52c75240-799d-44d8-954e-1c6bdf6b5460 | controller-1 | ACTIVE | ctlplane=192.168.24.6  | overcloud-full_20190119T154245Z | controller |
| ffbe06ef-fa6d-491c-863c-fbfe0916dc25 | controller-0 | ACTIVE | ctlplane=192.168.24.9  | overcloud-full_20190119T154245Z | controller |
| 7c3fb5c7-ef7d-4ac8-bd08-d4a0e0ecefaa | controller-2 | ACTIVE | ctlplane=192.168.24.8  | overcloud-full_20190119T154245Z | controller |
| 9f8fa950-0f26-46f2-b5e4-a2ec1a9208bc | ceph-0       | ACTIVE | ctlplane=192.168.24.15 | overcloud-full_20190119T154245Z | ceph       |
| 89d31d20-851c-4481-8cd7-ce66c5bfb5e1 | ceph-2       | ACTIVE | ctlplane=192.168.24.21 | overcloud-full_20190119T154245Z | ceph       |
| 1156cb19-2cd9-4665-a7fb-398d0aed900d | compute-0    | ACTIVE | ctlplane=192.168.24.17 | overcloud-full_20190119T154245Z | compute    |
+--------------------------------------+--------------+--------+------------------------+---------------------------------+------------+






2019-01-21 12:51:57.564 1 ERROR nova.compute.manager Traceback (most recent call last):
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7745, in _update_available_resource_for_node
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 724, in update_available_resource
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 747, in _update_available_resource
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 562, in _init_compute_node
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     self._copy_resources(cn, resources)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 649, in _copy_resources
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     compute_node.update_from_virt_driver(resources)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 354, in update_from_virt_driver
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     setattr(self, key, resources[key])
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 77, in setter
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager     raise exception.ReadOnlyFieldError(field=name)
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager ReadOnlyFieldError: Cannot modify readonly field uuid
2019-01-21 12:51:57.564 1 ERROR nova.compute.manager
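
The traceback above can be reduced to a small sketch. This is an assumed simplification of oslo.versionedobjects semantics (the class and attribute names below are illustrative, not real nova code): a read-only field may be set once but raises on any later assignment, which is exactly what happens when `update_from_virt_driver()` blindly re-applies `uuid` from the virt driver. The fix merged in review 611337 ("Ignore uuid if already set") skips that key:

```python
class ReadOnlyFieldError(Exception):
    pass

class ComputeNode:
    """Toy model of an oslo versioned object with one read-only field."""
    _read_only = {"uuid"}

    def __init__(self):
        self._values = {}

    def __setattr__(self, key, value):
        if key.startswith("_"):          # internal attributes bypass the check
            super().__setattr__(key, value)
            return
        if key in self._read_only and key in self._values:
            # Mirrors the "Cannot modify readonly field uuid" error above.
            raise ReadOnlyFieldError("Cannot modify readonly field %s" % key)
        self._values[key] = value

    def update_from_virt_driver(self, resources):
        # Paraphrase of the upstream fix: ignore 'uuid' when the node
        # already has one instead of setattr-ing every key unconditionally.
        for key, value in resources.items():
            if key == "uuid" and "uuid" in self._values:
                continue
            setattr(self, key, value)

cn = ComputeNode()
cn.uuid = "abc"                                   # first assignment is allowed
cn.update_from_virt_driver({"uuid": "def", "vcpus": 4})
print(cn._values["uuid"])                         # uuid untouched, no error
```

Without the `continue`, the second assignment to `uuid` raises and the whole resource-update cycle aborts, which is why the periodic task keeps failing in the log.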

Comment 18 Dmitry Tantsur 2019-01-21 18:19:59 UTC
Maybe we should clone the issue to avoid piling too much into one bug? This one clearly comes from Nova; I'm not even sure why it happens, so we'll need the Compute team to look.

Comment 19 Alexander Chuzhoy 2019-01-21 21:15:30 UTC
I suggest instead re-opening https://bugzilla.redhat.com/show_bug.cgi?id=1652998 (marked as a duplicate of this one) and verifying it with the bits in comment #17.

Then we can just switch this bug to Nova. Agreed?

Comment 20 Dmitry Tantsur 2019-01-22 09:23:41 UTC
Works for me. The only inconvenience is that https://bugzilla.redhat.com/show_bug.cgi?id=1665032 was opened as a clone of this one for 13.

Comment 25 Martin Schuppert 2019-01-23 16:46:00 UTC
First of all, the env has the same upgrade config issue we had in comment 5. During the upgrade, CloudDomain was not set, and we see new nova services registered with .localdomain.

(undercloud) [stack@undercloud-0 ~]$ openstack server list --long                                                                                                                                                                                                                         
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+----------------+--------------------------------------+-------------+--------------------------------------+-------------------+---------------------------+------------+
| ID                                   | Name         | Status | Task State   | Power State | Networks               | Image Name     | Image ID                             | Flavor Name | Flavor ID                            | Availability Zone | Host                      | Properties |
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+----------------+--------------------------------------+-------------+--------------------------------------+-------------------+---------------------------+------------+
| 32e6c748-3ec7-47bb-8136-e9d85cbabf7b | compute-0    | ACTIVE | powering-off | Running     | ctlplane=192.168.24.7  | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | compute     | 8f7ef499-f98f-4225-9086-a58792c77c0e | nova              | undercloud-0.redhat.local |            |
| e824742a-590b-4152-9118-be9a1480035e | controller-0 | ACTIVE | None         | Running     | ctlplane=192.168.24.14 | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | controller  | 57b21c4b-7338-47fd-a3f6-e83c5f228320 | nova              | undercloud-0.redhat.local |            |
| b352a704-4410-419a-9eee-2e9416ade2c2 | ceph-0       | ACTIVE | None         | Running     | ctlplane=192.168.24.10 | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | ceph        | e0b9842a-08e2-4dcb-9df2-c3c7c01267b1 | nova              | undercloud-0.redhat.local |            |
| 9913a31d-1561-4b15-a70a-46e83a1ec4ad | controller-1 | ACTIVE | None         | Running     | ctlplane=192.168.24.9  | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | controller  | 57b21c4b-7338-47fd-a3f6-e83c5f228320 | nova              | undercloud-0.redhat.local |            |
| d6b15f79-6974-4b8b-bed9-0cfdb790f1ad | controller-2 | ACTIVE | None         | Running     | ctlplane=192.168.24.11 | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | controller  | 57b21c4b-7338-47fd-a3f6-e83c5f228320 | nova              | undercloud-0.redhat.local |            |
| f8b38522-8594-4a74-b4a0-cdac1cd1eced | ceph-1       | ACTIVE | None         | Running     | ctlplane=192.168.24.13 | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | ceph        | e0b9842a-08e2-4dcb-9df2-c3c7c01267b1 | nova              | undercloud-0.redhat.local |            |
| 45c15aad-fc35-4275-9714-d3091966e948 | ceph-2       | ACTIVE | None         | Running     | ctlplane=192.168.24.8  | overcloud-full | 1940b8bd-dfb7-4fec-b6d6-b262e4254c06 | ceph        | e0b9842a-08e2-4dcb-9df2-c3c7c01267b1 | nova              | undercloud-0.redhat.local |            |
+--------------------------------------+--------------+--------+--------------+-------------+------------------------+----------------+--------------------------------------+-------------+--------------------------------------+-------------------+---------------------------+------------+
 
(undercloud) [stack@undercloud-0 ~]$ nova service-list
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary         | Host                      | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 5158912f-b901-4df3-bcf1-3e9ab4494b11 | nova-conductor | undercloud-0.redhat.local | internal | enabled | down  | 2019-01-22T21:39:21.000000 | -               | False       |
| 15df185d-7f65-4d93-bbbe-5ccd510f2026 | nova-scheduler | undercloud-0.redhat.local | internal | enabled | down  | 2019-01-22T21:39:16.000000 | -               | False       |
| 5c131b49-3c73-412b-a907-2dfafb9eefe6 | nova-compute   | undercloud-0.redhat.local | nova     | enabled | down  | 2019-01-22T21:39:18.000000 | -               | False       |
| ca4651ed-9026-44a1-9496-b0cd4a79b45c | nova-scheduler | undercloud-0.localdomain  | internal | enabled | up    | 2019-01-23T15:12:56.000000 | -               | False       |
| cbf3b56f-a54f-4af6-960c-a222acf362ca | nova-conductor | undercloud-0.localdomain  | internal | enabled | up    | 2019-01-23T15:12:55.000000 | -               | False       |
| 204f4808-3b9a-467a-b929-665606b03d4b | nova-compute   | undercloud-0.localdomain  | nova     | enabled | up    | 2019-01-23T15:12:47.000000 | -               | False       |
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+

(undercloud) [stack@undercloud-0 ~]$ nova show 32e6c748-3ec7-47bb-8136-e9d85cbabf7b |grep host
...
| OS-EXT-SRV-ATTR:host                 | undercloud-0.redhat.local

=> we can only manage the instances with the .redhat.local services!
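
The mismatch can be illustrated with a short sketch (not nova code; the dict shape is an assumption modeled on the `nova service-list` output above): a stop/start request is dispatched to the nova-compute service whose host matches the instance's `OS-EXT-SRV-ATTR:host`, so an instance pinned to `undercloud-0.redhat.local` is unmanageable while the only *up* compute service is the freshly registered `undercloud-0.localdomain` one:

```python
def manageable(instance_host, services):
    """Return True if an 'up' nova-compute service exists for instance_host."""
    return any(
        s["binary"] == "nova-compute"
        and s["host"] == instance_host
        and s["state"] == "up"
        for s in services
    )

# Shape taken from the service-list output above (before the fix).
services = [
    {"binary": "nova-compute", "host": "undercloud-0.redhat.local", "state": "down"},
    {"binary": "nova-compute", "host": "undercloud-0.localdomain", "state": "up"},
]

print(manageable("undercloud-0.redhat.local", services))  # the request is
# accepted by the API but the target compute service never processes it
```

This is why `nova stop` is "accepted" yet the instances stay stuck in the powering-off task state.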

We have docs BZ1664710 covering the missing upgrade information.


Fixed the env with:

 $ cat undercloud.conf 
~~~
[DEFAULT]
...
overcloud_domain_name = redhat.local
custom_env_files = /home/stack/undercloud_domain.yaml
~~~
 
$ cat /home/stack/undercloud_domain.yaml
parameter_defaults:
    CloudDomain: redhat.local
 
openstack undercloud install --no-validations
 
(undercloud) [stack@undercloud-0 ~]$ nova service-list
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary         | Host                      | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 5158912f-b901-4df3-bcf1-3e9ab4494b11 | nova-conductor | undercloud-0.redhat.local | internal | enabled | up    | 2019-01-23T15:48:28.000000 | -               | False       |
| 15df185d-7f65-4d93-bbbe-5ccd510f2026 | nova-scheduler | undercloud-0.redhat.local | internal | enabled | up    | 2019-01-23T15:48:32.000000 | -               | False       |
| 5c131b49-3c73-412b-a907-2dfafb9eefe6 | nova-compute   | undercloud-0.redhat.local | nova     | enabled | up    | 2019-01-23T15:48:25.000000 | -               | False       |
| ca4651ed-9026-44a1-9496-b0cd4a79b45c | nova-scheduler | undercloud-0.localdomain  | internal | enabled | down  | 2019-01-23T15:40:57.000000 | -               | False       |
| cbf3b56f-a54f-4af6-960c-a222acf362ca | nova-conductor | undercloud-0.localdomain  | internal | enabled | down  | 2019-01-23T15:40:55.000000 | -               | False       |
| 204f4808-3b9a-467a-b929-665606b03d4b | nova-compute   | undercloud-0.localdomain  | nova     | enabled | down  | 2019-01-23T15:41:47.000000 | -               | False       |
+--------------------------------------+----------------+---------------------------+----------+---------+-------+----------------------------+-----------------+-------------+

Update the transport_url in the nova_api DB:
(undercloud) [stack@undercloud-0 ~]$ docker exec -it -u root nova_api /bin/bash
()[root@undercloud-0 /]# nova-manage cell_v2 update_cell --cell_uuid 47831a44-5375-44c4-b49a-e6747c036a79 --name default --transport-url 'rabbit://0911cf9b478496df77a564b87778a8ba647b96fe:0da1f091d07eae06143517e88b9b1ea31c8d9701.redhat.local:5672/?ssl=0'


Restart the nova services:
docker restart nova_api
docker restart nova_conductor
docker restart nova_compute
docker restart nova_scheduler
docker restart nova_placement


After this, nova-compute still had the same errors, but it was necessary to fix the env first. The actual issue is [1], fixed in Rocky with [2], which we do not have in OSP14 at the moment.


After applying patch [2] and restarting the nova_compute container:

(undercloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+---------+------------+-------------+------------------------+
| ID                                   | Name         | Status  | Task State | Power State | Networks               |
+--------------------------------------+--------------+---------+------------+-------------+------------------------+
| b352a704-4410-419a-9eee-2e9416ade2c2 | ceph-0       | ACTIVE  | -          | Running     | ctlplane=192.168.24.10 |
| f8b38522-8594-4a74-b4a0-cdac1cd1eced | ceph-1       | ACTIVE  | -          | Running     | ctlplane=192.168.24.13 |
| 45c15aad-fc35-4275-9714-d3091966e948 | ceph-2       | ACTIVE  | -          | Running     | ctlplane=192.168.24.8  |
| 32e6c748-3ec7-47bb-8136-e9d85cbabf7b | compute-0    | SHUTOFF | -          | Shutdown    | ctlplane=192.168.24.7  |
| e824742a-590b-4152-9118-be9a1480035e | controller-0 | ACTIVE  | -          | Running     | ctlplane=192.168.24.14 |
| 9913a31d-1561-4b15-a70a-46e83a1ec4ad | controller-1 | ACTIVE  | -          | Running     | ctlplane=192.168.24.9  |
| d6b15f79-6974-4b8b-bed9-0cfdb790f1ad | controller-2 | ACTIVE  | -          | Running     | ctlplane=192.168.24.11 |
+--------------------------------------+--------------+---------+------------+-------------+------------------------+

We can also see the shutoff in the nova-compute log:
2019-01-23 11:15:36.193 1 DEBUG nova.network.base_api [req-9db010bc-2667-47b8-a803-28de142dbfbc - - - - -] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Updating instance_info_cache with network_info: [{"profile": {}, "ovs_interfaceid": null, "preserve_on_delete": true, "network": {"bridge": null, "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixe
d", "floating_ips": [], "address": "192.168.24.7"}], "version": 4, "meta": {"dhcp_server": "192.168.24.5"}, "dns": [], "routes": [{"interface": null, "cidr": "169.254.169.254/32", "meta": {}, "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "192.168.24.3"}}], "cidr": "192.168.24.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.24.1"}}], "meta": {"injected": false, "tunneled": false, "tenant_id": "d2f53bbf328043268217c21b29445a54", "physical_network": "ctlplane", "mtu": 1500}, "id": "4bb1a587-8720-4e19-aa55-c1ee5bc785bf", "label": "ctlplane"}, "devname": "taped9780e3-0c", "vnic_type": "baremetal", "qbh_params": null, "meta": {}, "details": {}, "address": "52:54:00:
c3:02:21", "active": true, "type": "other", "id": "ed9780e3-0cec-425c-b67b-96fbb07fe5db", "qbg_params": null}] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2019-01-23 11:15:36.210 1 DEBUG oslo_concurrency.lockutils [req-9db010bc-2667-47b8-a803-28de142dbfbc - - - - -] Releasing semaphore "refresh_cache-32e6c748-3ec7-47bb-8136-e9d85cbabf7b" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2019-01-23 11:15:36.228 1 DEBUG nova.virt.ironic.driver [req-9db010bc-2667-47b8-a803-28de142dbfbc - - - - -] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Power on called for instance power_on /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:1377
2019-01-23 11:15:36.411 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:38.417 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:40.417 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:42.411 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:44.415 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:46.439 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:48.424 1 DEBUG nova.virt.ironic.driver [-] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Still waiting for ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0 to power on: power_state="power off", target_power_state="power on", provision_state="active", target_provision_state=None _log_ironic_polling /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:131
2019-01-23 11:15:50.428 1 INFO nova.virt.ironic.driver [req-9db010bc-2667-47b8-a803-28de142dbfbc - - - - -] [instance: 32e6c748-3ec7-47bb-8136-e9d85cbabf7b] Successfully powered on Ironic node d1b4c28b-248c-4b7f-b461-d354dc66c5c0
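
The "Still waiting for ironic node ... to power on" lines above are the driver's polling loop. A hedged sketch of that pattern (function and parameter names are illustrative, not the real nova.virt.ironic API): poll the node's power state until it reaches the target, giving up after a bounded number of retries:

```python
import itertools
import time

def wait_for_power_state(get_state, target, interval=0.0, max_polls=10):
    """Poll get_state() until it returns target; raise on too many retries."""
    for attempt in itertools.count(1):
        state = get_state()
        if state == target:
            return attempt            # number of polls it took
        if attempt >= max_polls:
            raise TimeoutError("node never reached %s" % target)
        time.sleep(interval)          # the real driver sleeps between polls

# Simulated node that reports "power on" on the third poll:
states = iter(["power off", "power off", "power on"])
print(wait_for_power_state(lambda: next(states), "power on"))  # -> 3
```

In the log, each retry produces one `_log_ironic_polling` DEBUG line until the INFO "Successfully powered on" message.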

(undercloud) [stack@undercloud-0 ~]$ nova start 32e6c748-3ec7-47bb-8136-e9d85cbabf7b
Request to start server 32e6c748-3ec7-47bb-8136-e9d85cbabf7b has been accepted.
 
(undercloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks               |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| b352a704-4410-419a-9eee-2e9416ade2c2 | ceph-0       | ACTIVE | -          | Running     | ctlplane=192.168.24.10 |
| f8b38522-8594-4a74-b4a0-cdac1cd1eced | ceph-1       | ACTIVE | -          | Running     | ctlplane=192.168.24.13 |
| 45c15aad-fc35-4275-9714-d3091966e948 | ceph-2       | ACTIVE | -          | Running     | ctlplane=192.168.24.8  |
| 32e6c748-3ec7-47bb-8136-e9d85cbabf7b | compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.24.7  |
| e824742a-590b-4152-9118-be9a1480035e | controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.24.14 |
| 9913a31d-1561-4b15-a70a-46e83a1ec4ad | controller-1 | ACTIVE | -          | Running     | ctlplane=192.168.24.9  |
| d6b15f79-6974-4b8b-bed9-0cfdb790f1ad | controller-2 | ACTIVE | -          | Running     | ctlplane=192.168.24.11 |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+


[1] https://bugs.launchpad.net/nova/+bug/1798172
[2] https://review.openstack.org/#/c/611337/

Comment 34 errata-xmlrpc 2019-03-18 13:02:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0592

