Bug 1576953 - FFU: undercloud nova hypervisor-list reports duplicate entries after FFU upgrade
Summary: FFU: undercloud nova hypervisor-list reports duplicate entries after FFU upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: instack-undercloud
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: async
Target Release: 11.0 (Ocata)
Assignee: Ollie Walsh
QA Contact: Marius Cornea
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-10 20:10 UTC by Marius Cornea
Modified: 2023-02-22 23:02 UTC (History)
19 users

Fixed In Version: instack-undercloud-6.1.6-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-27 20:16:31 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
sosreport (14.28 MB, application/x-xz)
2018-05-10 20:13 UTC, Marius Cornea
no flags Details
undercloud-sosreport-duplicate-hypervisors (17.07 MB, application/x-xz)
2018-06-21 20:34 UTC, Chris Janiszewski
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1773398 0 None None None 2018-05-25 16:03:44 UTC
OpenStack gerrit 570081 0 None MERGED Stop creating duplicate cell_v2 cells in N->O upgrade 2020-08-28 06:33:02 UTC
Red Hat Issue Tracker OSP-11388 0 None None None 2021-12-10 16:12:20 UTC
Red Hat Product Errata RHBA-2018:2099 0 None None None 2018-06-27 20:16:37 UTC

Description Marius Cornea 2018-05-10 20:10:16 UTC
Description of problem:
FFU: undercloud nova hypervisor-list reports duplicate entries after FFU upgrade:

We can see that we have 8 registered ironic nodes but nova hypervisor-list reports the nodes twice:

(undercloud) [stack@undercloud-0 ~]$ nova hypervisor-list 
+--------------------------------------+--------------------------------------+-------+---------+
| ID                                   | Hypervisor hostname                  | State | Status  |
+--------------------------------------+--------------------------------------+-------+---------+
| 37dca627-3c71-4872-bea0-7e670366f9f6 | b1f034a2-f4ba-4d4f-90a9-8526fee3b836 | up    | enabled |
| df22d9bd-ff4f-4957-93f6-bfbb83f901e9 | ebd418f7-0282-409c-9535-fc224fe1d6db | up    | enabled |
| 30c65cdb-b270-43d3-b284-c139609e36b6 | 7ad52a39-719e-4d88-bdb1-5e1464e3f2d4 | up    | enabled |
| d47be4ab-9531-4443-8f22-d291b908b975 | c8dea625-e48d-496b-97b6-405735cb9a23 | up    | enabled |
| 2807f816-b48a-441d-8c0a-6d6080463758 | 8549c1d0-e1b5-4fb6-90a7-63c5b6e1d2aa | up    | enabled |
| 5245cc80-bee9-40e4-ad3f-ed1345e77f19 | 5c4f6ecd-576a-4127-9147-48363463a0a2 | up    | enabled |
| c4f74b3c-b06b-419b-81ea-1b702f205e08 | 0c520265-b26c-4333-8782-ffb084c7be3d | up    | enabled |
| 526cbe0b-b4a9-4feb-87a2-78fc9a7a15cb | de6b2f8a-e632-4bc7-83a9-51439af90b9d | up    | enabled |
| 37dca627-3c71-4872-bea0-7e670366f9f6 | b1f034a2-f4ba-4d4f-90a9-8526fee3b836 | up    | enabled |
| df22d9bd-ff4f-4957-93f6-bfbb83f901e9 | ebd418f7-0282-409c-9535-fc224fe1d6db | up    | enabled |
| 30c65cdb-b270-43d3-b284-c139609e36b6 | 7ad52a39-719e-4d88-bdb1-5e1464e3f2d4 | up    | enabled |
| d47be4ab-9531-4443-8f22-d291b908b975 | c8dea625-e48d-496b-97b6-405735cb9a23 | up    | enabled |
| 2807f816-b48a-441d-8c0a-6d6080463758 | 8549c1d0-e1b5-4fb6-90a7-63c5b6e1d2aa | up    | enabled |
| 5245cc80-bee9-40e4-ad3f-ed1345e77f19 | 5c4f6ecd-576a-4127-9147-48363463a0a2 | up    | enabled |
| c4f74b3c-b06b-419b-81ea-1b702f205e08 | 0c520265-b26c-4333-8782-ffb084c7be3d | up    | enabled |
| 526cbe0b-b4a9-4feb-87a2-78fc9a7a15cb | de6b2f8a-e632-4bc7-83a9-51439af90b9d | up    | enabled |
+--------------------------------------+--------------------------------------+-------+---------+
(undercloud) [stack@undercloud-0 ~]$ ironic node-list
The "ironic" CLI is deprecated and will be removed in the S* release. Please use the "openstack baremetal" CLI instead.
+--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+
| 8549c1d0-e1b5-4fb6-90a7-63c5b6e1d2aa | ceph-0       | d95b499f-314c-4353-b0eb-fbafb703f117 | power on    | active             | False       |
| 0c520265-b26c-4333-8782-ffb084c7be3d | ceph-1       | dd4a7086-ba57-47db-b919-3aae0d8fcad7 | power on    | active             | False       |
| 5c4f6ecd-576a-4127-9147-48363463a0a2 | ceph-2       | 2be96a4d-e7dc-48b4-a230-9b96116bf23d | power on    | active             | False       |
| 7ad52a39-719e-4d88-bdb1-5e1464e3f2d4 | compute-0    | a1e731f1-336b-4e6a-9aa2-b64bfc19eea3 | power on    | active             | False       |
| b1f034a2-f4ba-4d4f-90a9-8526fee3b836 | controller-0 | 81919652-a06f-43d0-aa88-2a01dc271d93 | power on    | active             | False       |
| c8dea625-e48d-496b-97b6-405735cb9a23 | controller-1 | 35879b57-3140-4d76-85df-c451b80bdc16 | power on    | active             | False       |
| ebd418f7-0282-409c-9535-fc224fe1d6db | controller-2 | ae78f2ed-6200-41c1-85f0-61bafdc6b46e | power on    | active             | False       |
| de6b2f8a-e632-4bc7-83a9-51439af90b9d | compute-3    | None                                 | power on    | manageable         | False       |
+--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+


Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Deploy OSP10 with 3 controller + 2 computes + 3 ceph OSD nodes
2. Upgrade to OSP13 via fast forward procedure
3. Check nova hypervisor-list

Actual results:
Duplicated entries are reported.

Expected results:
There should be no duplicated entries.

Additional info:

Comment 1 Marius Cornea 2018-05-10 20:13:20 UTC
Created attachment 1434549 [details]
sosreport

Comment 2 Jose Luis Franco 2018-05-11 09:01:59 UTC
Hey @Marius,

I was wondering if you still had the environment available to check if the ironic services are duplicated. I realized that none of the tht ironic-* templates includes the fast_forward_upgrade_tasks:
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/ironic-conductor.yaml#L231
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/ironic-api.yaml#L194
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/ironic-inspector.yaml#L214

So it looks to me like we're not stopping the baremetal services. Could you please check whether, after the ffwd upgrade, the openstack-ironic-* services (and their corresponding containers) are still running?
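
A check along these lines should show whether anything ironic-related was left behind (the service and container names here are the usual OSP ones and may differ per environment):

# systemd-managed ironic services on the undercloud
sudo systemctl list-units 'openstack-ironic-*' --no-pager
# containerized ironic services, if any
sudo docker ps --format '{{.Names}}' | grep -i ironic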

Thanks

Comment 3 Jose Luis Franco 2018-05-11 15:34:26 UTC
Maybe somebody from DFG:Compute could have a look at this. I checked the nova-compute logs, and this is what gets logged during the command execution:

http://pastebin.test.redhat.com/589642

We can see that this log line is printed twice per node UUID, each time with a different age value:

2018-05-11 11:12:17.110 24769 DEBUG nova.virt.ironic.driver [req-0b49c12d-a247-430e-9167-b6981edcaf41 - - - - -] Using cache for node 9355e73a-803b-4dec-a2b9-dff3db6f0478, age: 0.0254299640656 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:855
2018-05-11 11:12:17.147 24769 DEBUG nova.virt.ironic.driver [req-0b49c12d-a247-430e-9167-b6981edcaf41 - - - - -] Using cache for node 9355e73a-803b-4dec-a2b9-dff3db6f0478, age: 0.0624508857727 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:855
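
Counting the occurrences per node UUID confirms that every node really is handled twice per pass (log path assumed to be the default /var/log/nova/nova-compute.log):

grep -o 'Using cache for node [0-9a-f-]*' /var/log/nova/nova-compute.log | sort | uniq -c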

Also, I went to check the nova_api -> resource_providers table in the DB, and the rows don't appear duplicated:

MariaDB [nova_api]> select * from resource_providers;
+---------------------+---------------------+----+--------------------------------------+--------------------------------------+------------+----------+------------------+--------------------+
| created_at          | updated_at          | id | uuid                                 | name                                 | generation | can_host | root_provider_id | parent_provider_id |
+---------------------+---------------------+----+--------------------------------------+--------------------------------------+------------+----------+------------------+--------------------+
| 2018-05-11 05:32:39 | 2018-05-11 05:56:14 |  1 | 5a3d81e5-6f13-48ee-8b6f-68fd6c4f2e85 | 9355e73a-803b-4dec-a2b9-dff3db6f0478 |          2 |        0 |                1 |               NULL |
| 2018-05-11 05:32:40 | 2018-05-11 05:56:14 |  2 | 344bff12-11c7-4347-b8f9-48db90da30a7 | fd9f10b9-67fc-4711-937a-ee902a88f028 |          2 |        0 |                2 |               NULL |
| 2018-05-11 05:32:40 | 2018-05-11 05:56:15 |  3 | acdad034-e524-4391-b3a2-dccc9c32d69d | e3d43f9d-978d-4d00-908d-e500f98b6a82 |          2 |        0 |                3 |               NULL |
| 2018-05-11 05:32:40 | 2018-05-11 05:56:15 |  4 | 42f0c640-522f-4f03-beb8-fad721699f10 | 25404d59-a9ad-460e-913b-d7f51e350d1c |          2 |        0 |                4 |               NULL |
| 2018-05-11 05:32:41 | 2018-05-11 05:56:15 |  5 | caa80dce-d49e-4e44-9a8f-416346687c3d | f34533f7-415a-42b7-8c0f-3fa70ae1c6c1 |          2 |        0 |                5 |               NULL |
| 2018-05-11 05:32:41 | 2018-05-11 05:56:16 |  6 | a4aa845b-fbd1-4140-a43b-3734d94823fe | 2665b86b-fb0c-4e8c-b2ad-c3eb25719fe2 |          2 |        0 |                6 |               NULL |
| 2018-05-11 05:32:41 | 2018-05-11 05:56:16 |  7 | 4e0bf6c4-aadd-4ce9-9f7b-168a8105518d | b0fd8e4f-8960-47ac-95c3-1e505a6d4e55 |          2 |        0 |                7 |               NULL |
| 2018-05-11 05:32:41 | 2018-05-11 05:56:16 |  8 | dd4bff90-d3a0-48c8-a83e-c82b90d080db | 44821d69-8386-470a-8d42-2c3dfb011760 |          2 |        0 |                8 |               NULL |
+---------------------+---------------------+----+--------------------------------------+--------------------------------------+------------+----------+------------------+--------------------+
8 rows in set (0.00 sec)
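
For completeness, the compute_nodes rows backing the hypervisor list live in the cell database and can be checked the same way (assuming the cell database is named nova, as on a default undercloud):

sudo mysql nova -e "select id, hypervisor_hostname from compute_nodes where deleted = 0;"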

Comment 4 Bob Fournier 2018-05-11 17:09:30 UTC
Including DFG:Compute per comment 3.

Comment 5 Jose Luis Franco 2018-05-14 13:07:21 UTC
By the way, I just realized that the same issue described in https://bugzilla.redhat.com/show_bug.cgi?id=1573290 is also happening in the impacted environment (it could be related). Nova services appear duplicated as well:

(undercloud) [stack@undercloud-0 ~]$ openstack compute service list
+----+----------------+---------------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host                      | Zone     | Status  | State | Updated At                 |
+----+----------------+---------------------------+----------+---------+-------+----------------------------+
|  1 | nova-cert      | undercloud-0.redhat.local | internal | enabled | down  | 2018-05-14T01:03:38.000000 |
|  2 | nova-scheduler | undercloud-0.redhat.local | internal | enabled | up    | 2018-05-14T12:54:27.000000 |
|  3 | nova-conductor | undercloud-0.redhat.local | internal | enabled | up    | 2018-05-14T12:54:36.000000 |
|  5 | nova-compute   | undercloud-0.redhat.local | nova     | enabled | up    | 2018-05-14T12:54:35.000000 |
|  1 | nova-cert      | undercloud-0.redhat.local | internal | enabled | down  | 2018-05-14T01:03:38.000000 |
|  2 | nova-scheduler | undercloud-0.redhat.local | internal | enabled | up    | 2018-05-14T12:54:27.000000 |
|  3 | nova-conductor | undercloud-0.redhat.local | internal | enabled | up    | 2018-05-14T12:54:36.000000 |
|  5 | nova-compute   | undercloud-0.redhat.local | nova     | enabled | up    | 2018-05-14T12:54:35.000000 |
+----+----------------+---------------------------+----------+---------+-------+----------------------------+

@Matthew, could somebody from the Compute DFG help us triage this issue, please?
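
As a quick check that these service rows are not physically duplicated in the database (i.e. the duplication is only in how the API aggregates them), one could query the services table directly, assuming the cell database is named nova as on a default undercloud:

sudo mysql nova -e "select id, host, topic, disabled from services;"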

Comment 12 Ollie Walsh 2018-05-17 19:15:07 UTC
I wonder if they are actually identical, passwords too.

https://github.com/openstack/puppet-nova/blob/stable/pike/manifests/cell_v2/simple_setup.pp will be used IIRC but it should be a no-op in this case

Comment 13 Lee Yarwood 2018-05-18 08:11:19 UTC
(In reply to Ollie Walsh from comment #12)
> wonder if they are actually identical, passwords too
> 
> https://github.com/openstack/puppet-nova/blob/stable/pike/manifests/cell_v2/
> simple_setup.pp will be used IIRC but it should be a no-op in this case

Yeah, they appear to be the same for each:

MariaDB [nova_api]> select * from cell_mappings\G;
*************************** 1. row ***************************
         created_at: 2018-05-17 17:21:28
         updated_at: NULL
                 id: 1
               uuid: 00000000-0000-0000-0000-000000000000
               name: cell0
      transport_url: none:///
database_connection: mysql+pymysql://nova:e1723a2bbe9b5e6f13973df759cd1d68f463628d.24.1/nova_cell0
*************************** 2. row ***************************
         created_at: 2018-05-17 17:21:35
         updated_at: NULL
                 id: 2
               uuid: 79ca43ea-85c2-4868-85c4-6994613e66ac
               name: default
      transport_url: rabbit://0d13c9efcbad53c072f22b3113862294bdbe622f:ea1761890509f5c6da0ce20c40f701af688e3607.24.1//
database_connection: mysql+pymysql://nova:e1723a2bbe9b5e6f13973df759cd1d68f463628d.24.1/nova
*************************** 3. row ***************************
         created_at: 2018-05-17 17:21:51
         updated_at: NULL
                 id: 3
               uuid: 74cddb90-1b62-4bf9-86cf-053d1da93a95
               name: NULL
      transport_url: rabbit://0d13c9efcbad53c072f22b3113862294bdbe622f:ea1761890509f5c6da0ce20c40f701af688e3607.24.1//
database_connection: mysql+pymysql://nova:e1723a2bbe9b5e6f13973df759cd1d68f463628d.24.1/nova
3 rows in set (0.00 sec)

Comment 14 Lee Yarwood 2018-05-18 11:06:12 UTC
Went over this again and the issue appears to be with the 10 to 11 upgrade:

2018-05-18 06:43:46,008 INFO: Warning: Unknown variable: '::nova::db::mysql_api::setup_cell0'. at /etc/puppet/modules/nova/manifests/db/mysql.pp:53:28
2018-05-18 06:45:42,347 INFO: Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: created
2018-05-18 06:45:59,003 INFO: Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created
2018-05-18 06:45:59,099 INFO: Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_192.168.24.1]/Mysql_grant[nova.24.1/nova_cell0.*]/ensure: created
2018-05-18 06:45:59,619 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Triggered 'refresh' from 1 events
2018-05-18 06:46:03,364 INFO: Notice: /Stage[main]/Nova::Cell_v2::Map_cell0/Exec[nova-cell_v2-map_cell0]: Triggered 'refresh' from 1 events
2018-05-18 06:46:03,366 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Triggered 'refresh' from 1 events
2018-05-18 06:46:10,394 INFO: Notice: /Stage[main]/Nova::Cell_v2::Simple_setup/Nova_cell_v2[default]/ensure: created
2018-05-18 06:46:27,541 INFO: Notice: /Stage[main]/Nova::Cell_v2::Map_cell_and_hosts/Exec[nova-cell_v2-map_cell_and_hosts]: Triggered 'refresh' from 1 events
2018-05-18 06:46:34,990 INFO: Notice: /Stage[main]/Nova::Cell_v2::Map_instances/Exec[nova-cell_v2-map_instances]: Triggered 'refresh' from 1 events
2018-05-18 06:50:00,205 INFO: Notice: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Triggered 'refresh' from 2 events
2018-05-18 06:50:21,527 INFO:      Nova cell v2: 3.47

It appears that the map_cell_and_hosts step is introducing the duplicate mapping:

MariaDB [nova_api]> select * from cell_mappings\G;
*************************** 1. row ***************************
         created_at: 2018-05-18 10:46:03
         updated_at: NULL
                 id: 1
               uuid: 00000000-0000-0000-0000-000000000000
               name: cell0
      transport_url: none:///
database_connection: mysql+pymysql://nova:cb31f14d745db796caed6b8df03c4ed985cc458e.24.1/nova_cell0
*************************** 2. row ***************************
         created_at: 2018-05-18 10:46:10
         updated_at: NULL
                 id: 2
               uuid: b2948248-6756-489f-a31b-cf96d0e54432
               name: default
      transport_url: rabbit://e4d5d4c1bf16d6da5a4617791b19a323ee607fd5:2b6ba8504773a3a9009f3ff8b448d4d8cfe1e213.24.1//
database_connection: mysql+pymysql://nova:cb31f14d745db796caed6b8df03c4ed985cc458e.24.1/nova
*************************** 3. row ***************************
         created_at: 2018-05-18 10:46:27
         updated_at: NULL
                 id: 3
               uuid: bffff9e7-00eb-46e1-abc5-9c7e2c03655b
               name: NULL
      transport_url: rabbit://e4d5d4c1bf16d6da5a4617791b19a323ee607fd5:2b6ba8504773a3a9009f3ff8b448d4d8cfe1e213.24.1//
database_connection: mysql+pymysql://nova:cb31f14d745db796caed6b8df03c4ed985cc458e.24.1/nova
3 rows in set (0.00 sec)

Earlier today owalsh pointed out the following nova-manage code, which appears to be the problem here:

https://github.com/openstack/nova/blob/stable/ocata/nova/cmd/manage.py#L1310-L1316
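
Both the unnamed cell and the 'default' cell above point at the same cell database, which would explain why listings that iterate over all cell mappings (hypervisor and compute service lists) report every entry twice. Whether compute hosts were also mapped into both cells can be checked with a query along these lines (column names per the Ocata nova_api schema):

sudo mysql nova_api -e "select host, count(*) as mappings from host_mappings group by host having count(*) > 1;"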

Comment 15 Ollie Walsh 2018-05-18 11:25:13 UTC
Are you running without any ironic nodes?

Comment 16 Yolanda Robla 2018-05-25 07:27:30 UTC
I tested the patch, and the duplications of hypervisors were fixed.

Comment 18 Vincent S. Cojot 2018-06-19 18:16:37 UTC
Once you've hit the issue (and you're at OSP12), is there any way to 'repair' this so that you can continue the FFU process?
Thanks,

Comment 19 Vincent S. Cojot 2018-06-19 18:26:08 UTC
Also noticed this:

[root@instack ~]# nova-manage cell_v2 list_cells
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+
|   Name  |                 UUID                 |                           Transport URL                            |              Database Connection               |
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+
|   None  | d04e50d0-f129-4052-912c-a5c6a48a9120 | rabbit://17fbfc42443a3683b25feebb945a3c93abc161e5:****@10.20.0.2// |    mysql+pymysql://nova:****@10.20.0.2/nova    |
|  cell0  | 00000000-0000-0000-0000-000000000000 |                               none:/                               | mysql+pymysql://nova:****@10.20.0.2/nova_cell0 |
| default | cb8d222e-d347-4362-9a81-4e9347346804 | rabbit://17fbfc42443a3683b25feebb945a3c93abc161e5:****@10.20.0.2// |    mysql+pymysql://nova:****@10.20.0.2/nova    |
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+

Comment 23 Ollie Walsh 2018-06-20 12:13:48 UTC
 (In reply to Vincent S. Cojot from comment #18)
> Once you've hit the issue (and you're at OSP12), is there any way to
> 'repair' this so that you could continuer the FFU process?
> Thanks,

This should work:
mysql nova_api -e "delete from instance_mappings;"
nova-manage cell_v2 delete_cell --force --cell_uuid d04e50d0-f129-4052-912c-a5c6a48a9120 (the uuid with name None)
nova-manage cell_v2 discover_hosts --verbose
nova-manage cell_v2 map_instances --cell_uuid cb8d222e-d347-4362-9a81-4e9347346804 (the uuid with name default)
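
In generic form, with hypothetical placeholder UUIDs and brief comments (each UUID must be taken from the list_cells output of the affected undercloud):

# 1. identify the duplicate cell: the row whose Name is 'None'
sudo nova-manage cell_v2 list_cells
# 2. clear stale instance mappings, then delete the duplicate cell
sudo mysql nova_api -e "delete from instance_mappings;"
sudo nova-manage cell_v2 delete_cell --force --cell_uuid <UUID_OF_CELL_NAMED_NONE>
# 3. re-create host and instance mappings against the remaining 'default' cell
sudo nova-manage cell_v2 discover_hosts --verbose
sudo nova-manage cell_v2 map_instances --cell_uuid <UUID_OF_DEFAULT_CELL>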

Comment 26 Vincent S. Cojot 2018-06-20 19:59:33 UTC
(In reply to Ollie Walsh from comment #23)
>  (In reply to Vincent S. Cojot from comment #18)
> > Once you've hit the issue (and you're at OSP12), is there any way to
> > 'repair' this so that you could continuer the FFU process?
> > Thanks,
> 
> This should work:
> mysql nova_api -e "delete from instance_mappings;"'
> nova-manage cell_v2 delete_cell --force --cell_uuid
> d04e50d0-f129-4052-912c-a5c6a48a9120 (the uuid with name None)
> nova-manage cell_v2 discover_hosts --verbose
> nova-manage cell_v2 map_instances --cell_uuid
> cb8d222e-d347-4362-9a81-4e9347346804 (the uuid with name default)

Hi Ollie,
I've already upgraded the undercloud to OSP12 at this point, and these steps fix some issues but not all.
Here's what I had (after the OSP12 undercloud upgrade, after which I noticed the issue occurring again).

(undercloud) [stack@instack ~]$ sudo su -
Last login: Wed Jun 20 14:13:41 EDT 2018 on pts/1
[root@instack ~]# nova-manage cell_v2 list_cells
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+
|   Name  |                 UUID                 |                           Transport URL                            |              Database Connection               |
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+
|   None  | 3ac75ac9-3eca-4cfd-8358-a3e861abd2d4 | rabbit://17fbfc42443a3683b25feebb945a3c93abc161e5:****@10.20.0.2// |    mysql+pymysql://nova:****@10.20.0.2/nova    |
|  cell0  | 00000000-0000-0000-0000-000000000000 |                               none:/                               | mysql+pymysql://nova:****@10.20.0.2/nova_cell0 |
| default | 7a2e4926-5635-4df0-9d5a-336d2e70a695 | rabbit://17fbfc42443a3683b25feebb945a3c93abc161e5:****@10.20.0.2// |    mysql+pymysql://nova:****@10.20.0.2/nova    |
+---------+--------------------------------------+--------------------------------------------------------------------+------------------------------------------------+
[root@instack ~]# mysql --defaults-file=/root/.my.cnf nova_api -e "delete from instance_mappings;"
[root@instack ~]# nova-manage cell_v2 delete_cell --force --cell_uuid 3ac75ac9-3eca-4cfd-8358-a3e861abd2d4
[root@instack ~]# nova-manage cell_v2 discover_hosts --verbose
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'default': 7a2e4926-5635-4df0-9d5a-336d2e70a695
Found 16 unmapped computes in cell: 7a2e4926-5635-4df0-9d5a-336d2e70a695
Checking host mapping for compute host 'instack': f6fd72aa-fb01-4b14-afec-1aa6e92926b1
Creating host mapping for compute host 'instack': f6fd72aa-fb01-4b14-afec-1aa6e92926b1
Checking host mapping for compute host 'instack': 90ed87f9-a5a8-4049-b178-0516e006db6f
Checking host mapping for compute host 'instack': dbdabf7b-d3af-419a-b1ee-8aaee0732077
Checking host mapping for compute host 'instack': e0f09e42-c0a3-4e35-adc8-6000c2c3010d
Checking host mapping for compute host 'instack': 13a98ba1-99c2-40d2-827c-6c1b79f85473
Checking host mapping for compute host 'instack': af6e6aaf-fd6e-4efd-bc89-cafdaa509cab
Checking host mapping for compute host 'instack': 83accc49-52bd-42ad-be17-e97ebb6f2498
Checking host mapping for compute host 'instack': 736486cb-5e63-4c88-9aa8-6b7c474a4836
Checking host mapping for compute host 'instack': a0443200-4081-4db5-97e0-9d97cf6ef2ea
Checking host mapping for compute host 'instack': 7f5ec1f7-612f-4197-9c1d-0721920a3107
Checking host mapping for compute host 'instack': c7f9b856-acf7-4e60-baeb-6cd6544dad2f
Checking host mapping for compute host 'instack': 422a2da6-9ab6-4a51-b20f-61aee0cd532b
Checking host mapping for compute host 'instack': 265061ff-11fb-4225-8d56-ea741ceb35e8
Checking host mapping for compute host 'instack': 10b79c06-39a0-466b-b827-b25ea2a40669
Checking host mapping for compute host 'instack': b1e5f860-3f10-4e43-b030-795a22086d55
Checking host mapping for compute host 'instack': d36e8794-7685-4072-9301-c07c73cae730
[root@instack ~]# nova-manage cell_v2 map_instances --cell_uuid 7a2e4926-5635-4df0-9d5a-336d2e70a695

Comment 27 Vincent S. Cojot 2018-06-20 20:00:46 UTC
So I noticed this:
[stack@instack ~]$ openstack hypervisor list
+-----+--------------------------------------+-----------------+----------------+-------+
|  ID | Hypervisor Hostname                  | Hypervisor Type | Host IP        | State |
+-----+--------------------------------------+-----------------+----------------+-------+
| 152 | a277af3f-89a7-403a-a2bd-85c4f2532202 | ironic          | 192.168.122.99 | up    |
| 153 | 342df16e-3551-4129-bd22-744da8a8ec06 | ironic          | 192.168.122.99 | up    |
| 155 | f8c164d3-9b38-4d8f-bde0-3d1cc6297468 | ironic          | 192.168.122.99 | up    |
| 178 | 2826b8ab-a1bf-4f2c-8e00-3bcaca07f2e9 | ironic          | 192.168.122.99 | up    |
| 184 | d751194d-789d-4117-8bb1-6e0f1d89d75b | ironic          | 192.168.122.99 | up    |
| 185 | 7539bc0f-ed07-4f03-bb33-2cd9e172459d | ironic          | 192.168.122.99 | up    |
| 186 | f9cb72da-220d-4f8f-9f63-492b0b29a1a0 | ironic          | 192.168.122.99 | up    |
| 187 | f03d3ac1-e138-4d7e-a4a8-785e1a1dd5af | ironic          | 192.168.122.99 | up    |
| 188 | 2297473f-0e68-490c-815e-89b405c31d88 | ironic          | 192.168.122.99 | up    |
| 189 | b6a601cb-85a6-4f2c-9575-14b5547c3141 | ironic          | 192.168.122.99 | up    |
| 190 | 314c3e9a-ff2c-4b20-8584-d67885275db5 | ironic          | 192.168.122.99 | up    |
| 191 | a2d37b03-3ebc-4189-a7c8-b95426a77a66 | ironic          | 192.168.122.99 | up    |
| 192 | d8c34684-2f30-4aa6-a690-d22fd1b02ab1 | ironic          | 192.168.122.99 | up    |
| 193 | 0b8d535f-1593-413c-9599-38ce449ea1c0 | ironic          | 192.168.122.99 | up    |
| 194 | bb23fb3d-ed8a-4493-b49d-9a7884a541b4 | ironic          | 192.168.122.99 | up    |
| 195 | 0eb336b5-4de9-46ef-9e68-8a2622507692 | ironic          | 192.168.122.99 | up    |
+-----+--------------------------------------+-----------------+----------------+-------+

but a few minutes later, it went back to having duplicates:
[stack@instack ~]$ openstack hypervisor list
+-----+--------------------------------------+-----------------+----------------+-------+
|  ID | Hypervisor Hostname                  | Hypervisor Type | Host IP        | State |
+-----+--------------------------------------+-----------------+----------------+-------+
| 152 | a277af3f-89a7-403a-a2bd-85c4f2532202 | ironic          | 192.168.122.99 | up    |
| 153 | 342df16e-3551-4129-bd22-744da8a8ec06 | ironic          | 192.168.122.99 | up    |
| 155 | f8c164d3-9b38-4d8f-bde0-3d1cc6297468 | ironic          | 192.168.122.99 | up    |
| 178 | 2826b8ab-a1bf-4f2c-8e00-3bcaca07f2e9 | ironic          | 192.168.122.99 | up    |
| 184 | d751194d-789d-4117-8bb1-6e0f1d89d75b | ironic          | 192.168.122.99 | up    |
| 185 | 7539bc0f-ed07-4f03-bb33-2cd9e172459d | ironic          | 192.168.122.99 | up    |
| 186 | f9cb72da-220d-4f8f-9f63-492b0b29a1a0 | ironic          | 192.168.122.99 | up    |
| 187 | f03d3ac1-e138-4d7e-a4a8-785e1a1dd5af | ironic          | 192.168.122.99 | up    |
| 188 | 2297473f-0e68-490c-815e-89b405c31d88 | ironic          | 192.168.122.99 | up    |
| 189 | b6a601cb-85a6-4f2c-9575-14b5547c3141 | ironic          | 192.168.122.99 | up    |
| 190 | 314c3e9a-ff2c-4b20-8584-d67885275db5 | ironic          | 192.168.122.99 | up    |
| 191 | a2d37b03-3ebc-4189-a7c8-b95426a77a66 | ironic          | 192.168.122.99 | up    |
| 192 | d8c34684-2f30-4aa6-a690-d22fd1b02ab1 | ironic          | 192.168.122.99 | up    |
| 193 | 0b8d535f-1593-413c-9599-38ce449ea1c0 | ironic          | 192.168.122.99 | up    |
| 194 | bb23fb3d-ed8a-4493-b49d-9a7884a541b4 | ironic          | 192.168.122.99 | up    |
| 195 | 0eb336b5-4de9-46ef-9e68-8a2622507692 | ironic          | 192.168.122.99 | up    |
| 152 | a277af3f-89a7-403a-a2bd-85c4f2532202 | ironic          | 192.168.122.99 | up    |
| 153 | 342df16e-3551-4129-bd22-744da8a8ec06 | ironic          | 192.168.122.99 | up    |
| 155 | f8c164d3-9b38-4d8f-bde0-3d1cc6297468 | ironic          | 192.168.122.99 | up    |
| 178 | 2826b8ab-a1bf-4f2c-8e00-3bcaca07f2e9 | ironic          | 192.168.122.99 | up    |
| 184 | d751194d-789d-4117-8bb1-6e0f1d89d75b | ironic          | 192.168.122.99 | up    |
| 185 | 7539bc0f-ed07-4f03-bb33-2cd9e172459d | ironic          | 192.168.122.99 | up    |
| 186 | f9cb72da-220d-4f8f-9f63-492b0b29a1a0 | ironic          | 192.168.122.99 | up    |
| 187 | f03d3ac1-e138-4d7e-a4a8-785e1a1dd5af | ironic          | 192.168.122.99 | up    |
| 188 | 2297473f-0e68-490c-815e-89b405c31d88 | ironic          | 192.168.122.99 | up    |
| 189 | b6a601cb-85a6-4f2c-9575-14b5547c3141 | ironic          | 192.168.122.99 | up    |
| 190 | 314c3e9a-ff2c-4b20-8584-d67885275db5 | ironic          | 192.168.122.99 | up    |
| 191 | a2d37b03-3ebc-4189-a7c8-b95426a77a66 | ironic          | 192.168.122.99 | up    |
| 192 | d8c34684-2f30-4aa6-a690-d22fd1b02ab1 | ironic          | 192.168.122.99 | up    |
| 193 | 0b8d535f-1593-413c-9599-38ce449ea1c0 | ironic          | 192.168.122.99 | up    |
| 194 | bb23fb3d-ed8a-4493-b49d-9a7884a541b4 | ironic          | 192.168.122.99 | up    |
| 195 | 0eb336b5-4de9-46ef-9e68-8a2622507692 | ironic          | 192.168.122.99 | up    |
+-----+--------------------------------------+-----------------+----------------+-------+

Comment 28 Vincent S. Cojot 2018-06-20 20:03:12 UTC
More funky stuff (OSP12 undercloud):
[stack@instack ~]$ openstack compute service list
+----+----------------+---------+----------+---------+-------+----------------------------+
| ID | Binary         | Host    | Zone     | Status  | State | Updated At                 |
+----+----------------+---------+----------+---------+-------+----------------------------+
|  1 | nova-cert      | instack | internal | enabled | down  | 2018-06-20T19:00:46.000000 |
|  2 | nova-scheduler | instack | internal | enabled | up    | 2018-06-20T20:01:59.000000 |
|  3 | nova-conductor | instack | internal | enabled | up    | 2018-06-20T20:02:03.000000 |
|  4 | nova-compute   | instack | nova     | enabled | up    | 2018-06-20T20:02:05.000000 |
+----+----------------+---------+----------+---------+-------+----------------------------+

Then a few seconds later:

[stack@instack ~]$ openstack compute service list
+----+----------------+---------+----------+---------+-------+----------------------------+
| ID | Binary         | Host    | Zone     | Status  | State | Updated At                 |
+----+----------------+---------+----------+---------+-------+----------------------------+
|  1 | nova-cert      | instack | internal | enabled | down  | 2018-06-20T19:00:46.000000 |
|  2 | nova-scheduler | instack | internal | enabled | up    | 2018-06-20T20:02:09.000000 |
|  3 | nova-conductor | instack | internal | enabled | up    | 2018-06-20T20:02:13.000000 |
|  4 | nova-compute   | instack | nova     | enabled | up    | 2018-06-20T20:02:15.000000 |
|  1 | nova-cert      | instack | internal | enabled | down  | 2018-06-20T19:00:46.000000 |
|  2 | nova-scheduler | instack | internal | enabled | up    | 2018-06-20T20:02:09.000000 |
|  3 | nova-conductor | instack | internal | enabled | up    | 2018-06-20T20:02:13.000000 |
|  4 | nova-compute   | instack | nova     | enabled | up    | 2018-06-20T20:02:15.000000 |
+----+----------------+---------+----------+---------+-------+----------------------------+

Comment 29 Vincent S. Cojot 2018-06-20 20:04:44 UTC
Also of note (ever since I deleted the cell named 'None'):

[stack@instack ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+----------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks             |
+--------------------------------------+--------------+--------+------------+-------------+----------------------+
| 6b10a9ad-b578-4915-adfb-3d47d1b2e1e0 | krynn-ceph-0 | ERROR  | -          | Running     | ctlplane=10.20.0.106 |
| 73f55c70-1621-4d54-a6f9-0ee26e7701c4 | krynn-cmpt-0 | ERROR  | -          | Running     | ctlplane=10.20.0.107 |
| a80880bd-9eea-4551-882d-e4eaa2214893 | krynn-cmpt-1 | ERROR  | -          | Running     | ctlplane=10.20.0.111 |
| a26f221e-1161-45d0-b271-88df89280d12 | krynn-ctrl-0 | ERROR  | -          | Running     | ctlplane=10.20.0.109 |
| 728ba4ef-a75e-4ec2-8985-5a38ccabaef7 | krynn-ctrl-1 | ERROR  | -          | Running     | ctlplane=10.20.0.108 |
| e5696dba-81a1-41d4-b33a-e55007c1f339 | krynn-ctrl-2 | ERROR  | -          | Running     | ctlplane=10.20.0.103 |
+--------------------------------------+--------------+--------+------------+-------------+----------------------+

Comment 30 Vincent S. Cojot 2018-06-20 20:24:40 UTC
I rebooted the OSP12 undercloud and it seems much more consistent and stable now, so the nova-manage steps alone may not be sufficient; restarting the services may also be required:

(undercloud) [stack@instack ~]$ openstack compute service list
+----+----------------+---------+----------+---------+-------+----------------------------+
| ID | Binary         | Host    | Zone     | Status  | State | Updated At                 |
+----+----------------+---------+----------+---------+-------+----------------------------+
|  1 | nova-cert      | instack | internal | enabled | down  | 2018-06-20T19:00:46.000000 |
|  2 | nova-scheduler | instack | internal | enabled | up    | 2018-06-20T20:23:03.000000 |
|  3 | nova-conductor | instack | internal | enabled | up    | 2018-06-20T20:23:02.000000 |
|  4 | nova-compute   | instack | nova     | enabled | up    | 2018-06-20T20:23:08.000000 |
+----+----------------+---------+----------+---------+-------+----------------------------+
(undercloud) [stack@instack ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+----------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks             |
+--------------------------------------+--------------+--------+------------+-------------+----------------------+
| 6b10a9ad-b578-4915-adfb-3d47d1b2e1e0 | krynn-ceph-0 | ACTIVE | -          | Running     | ctlplane=10.20.0.106 |
| 73f55c70-1621-4d54-a6f9-0ee26e7701c4 | krynn-cmpt-0 | ACTIVE | -          | Running     | ctlplane=10.20.0.107 |
| a80880bd-9eea-4551-882d-e4eaa2214893 | krynn-cmpt-1 | ACTIVE | -          | Running     | ctlplane=10.20.0.111 |
| a26f221e-1161-45d0-b271-88df89280d12 | krynn-ctrl-0 | ACTIVE | -          | Running     | ctlplane=10.20.0.109 |
| 728ba4ef-a75e-4ec2-8985-5a38ccabaef7 | krynn-ctrl-1 | ACTIVE | -          | Running     | ctlplane=10.20.0.108 |
| e5696dba-81a1-41d4-b33a-e55007c1f339 | krynn-ctrl-2 | ACTIVE | -          | Running     | ctlplane=10.20.0.103 |
+--------------------------------------+--------------+--------+------------+-------------+----------------------+
(undercloud) [stack@instack ~]$ openstack hypervisor list
+-----+--------------------------------------+-----------------+----------------+-------+
|  ID | Hypervisor Hostname                  | Hypervisor Type | Host IP        | State |
+-----+--------------------------------------+-----------------+----------------+-------+
| 152 | a277af3f-89a7-403a-a2bd-85c4f2532202 | ironic          | 192.168.122.99 | up    |
| 153 | 342df16e-3551-4129-bd22-744da8a8ec06 | ironic          | 192.168.122.99 | up    |
| 155 | f8c164d3-9b38-4d8f-bde0-3d1cc6297468 | ironic          | 192.168.122.99 | up    |
| 178 | 2826b8ab-a1bf-4f2c-8e00-3bcaca07f2e9 | ironic          | 192.168.122.99 | up    |
| 184 | d751194d-789d-4117-8bb1-6e0f1d89d75b | ironic          | 192.168.122.99 | up    |
| 185 | 7539bc0f-ed07-4f03-bb33-2cd9e172459d | ironic          | 192.168.122.99 | up    |
| 186 | f9cb72da-220d-4f8f-9f63-492b0b29a1a0 | ironic          | 192.168.122.99 | up    |
| 187 | f03d3ac1-e138-4d7e-a4a8-785e1a1dd5af | ironic          | 192.168.122.99 | up    |
| 188 | 2297473f-0e68-490c-815e-89b405c31d88 | ironic          | 192.168.122.99 | up    |
| 189 | b6a601cb-85a6-4f2c-9575-14b5547c3141 | ironic          | 192.168.122.99 | up    |
| 190 | 314c3e9a-ff2c-4b20-8584-d67885275db5 | ironic          | 192.168.122.99 | up    |
| 191 | a2d37b03-3ebc-4189-a7c8-b95426a77a66 | ironic          | 192.168.122.99 | up    |
| 192 | d8c34684-2f30-4aa6-a690-d22fd1b02ab1 | ironic          | 192.168.122.99 | up    |
| 193 | 0b8d535f-1593-413c-9599-38ce449ea1c0 | ironic          | 192.168.122.99 | up    |
| 194 | bb23fb3d-ed8a-4493-b49d-9a7884a541b4 | ironic          | 192.168.122.99 | up    |
| 195 | 0eb336b5-4de9-46ef-9e68-8a2622507692 | ironic          | 192.168.122.99 | up    |
+-----+--------------------------------------+-----------------+----------------+-------+

Comment 31 Ollie Walsh 2018-06-20 21:07:51 UTC
(In reply to Vincent S. Cojot from comment #30)
> I rebooted the OSP12 undercloud and it seems much more consistent and stable
> now.. so the nova-manage steps may not be sufficient, it may also require
> restarting services:
> 
> [...]

Yes, I forgot to include the service restart.
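
The restart in question would be along these lines on a non-containerized undercloud where the nova services are managed by systemd (service names assumed):

sudo systemctl restart openstack-nova-api openstack-nova-scheduler \
    openstack-nova-conductor openstack-nova-compute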

Comment 32 Chris Janiszewski 2018-06-21 20:27:25 UTC
I have hit the same problem, but did not notice it until my undercloud was already at OSP13.

It seems like I can no longer clean this up and get back to a working state:
(undercloud) [stack@chrisjffu-undercloud ~]$ sudo nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+---------+--------------------------------------+----------------------------------------------------------------------+--------------------------------------------------+
|   Name  |                 UUID                 |                            Transport URL                             |               Database Connection                |
+---------+--------------------------------------+----------------------------------------------------------------------+--------------------------------------------------+
|   None  | dff89375-57f1-4de8-9441-0f8d0d8c117e | rabbit://ba493d27b2d57bb8aefff698625f850afe5ae5a4:****@172.16.0.11// |    mysql+pymysql://nova:****@172.16.0.11/nova    |
|  cell0  | 00000000-0000-0000-0000-000000000000 |                                none:/                                | mysql+pymysql://nova:****@172.16.0.11/nova_cell0 |
| default | b02c28e0-b610-4fb5-9c81-f54aa072ac5e | rabbit://ba493d27b2d57bb8aefff698625f850afe5ae5a4:****@172.16.0.11// |    mysql+pymysql://nova:****@172.16.0.11/nova    |
+---------+--------------------------------------+----------------------------------------------------------------------+--------------------------------------------------+
(undercloud) [stack@chrisjffu-undercloud ~]$ sudo mysql nova_api -e "delete from instance_mappings;"
(undercloud) [stack@chrisjffu-undercloud ~]$ ova-manage cell_v2 delete_cell --force --cell_uuid dff89375-57f1-4de8-9441-0f8d0d8c117e
-bash: ova-manage: command not found
(undercloud) [stack@chrisjffu-undercloud ~]$ sudo nova-manage cell_v2 delete_cell --force --cell_uuid dff89375-57f1-4de8-9441-0f8d0d8c117e
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

I will attach a sosreport below. Is there any workaround for this?

Comment 33 Chris Janiszewski 2018-06-21 20:34:34 UTC
Created attachment 1453605 [details]
undercloud-sosreport-duplicate-hypervisors

Comment 34 Ollie Walsh 2018-06-21 20:45:58 UTC
(In reply to Chris Janiszewski from comment #32)
> I have hit the same problem, but did not notice it until my undercloud has
> been already at OSP13.
> 
> It seems like I can no longer clean this up and go to the working state:
> (undercloud) [stack@chrisjffu-undercloud ~]$ sudo nova-manage cell_v2
> list_cells
> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
> NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
>   exception.NotSupportedWarning
> [...]
> (undercloud) [stack@chrisjffu-undercloud ~]$ sudo mysql nova_api -e "delete
> from instance_mappings;"
> (undercloud) [stack@chrisjffu-undercloud ~]$ ova-manage cell_v2 delete_cell
> --force --cell_uuid dff89375-57f1-4de8-9441-0f8d0d8c117e
> -bash: ova-manage: command not found
> (undercloud) [stack@chrisjffu-undercloud ~]$ sudo nova-manage cell_v2
> delete_cell --force --cell_uuid dff89375-57f1-4de8-9441-0f8d0d8c117e
> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
> NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
>   exception.NotSupportedWarning
> 
> I will attach sosreport below. Is there any workaround for that?

Not seeing anything wrong here... did you run the rest of the commands and restart services?
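
For reference, a quick way to confirm the cleanup took effect after the restart (both commands run on the undercloud; the uniq pipe is just a convenience for spotting repeated IDs):

sudo nova-manage cell_v2 list_cells   # should list only cell0 and default
openstack hypervisor list -f value -c ID | sort | uniq -d   # should print nothing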

Comment 35 Chris Janiszewski 2018-06-22 11:44:46 UTC
(In reply to Ollie Walsh from comment #34)

> Not seeing anything wrong here... did you run the rest of the commands and
> restart services?

This looked like an error, so I stopped the process and decided to go back to a snapshot of the OSP10 undercloud instead. I am attempting to repeat the process with a newer set of packages to avoid the error and the manual fixes.

Comment 39 errata-xmlrpc 2018-06-27 20:16:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2099

