Bug 1853281 - [Upgrade OSP13->OSP16.1] service catalog cinderv3 is duplicated [OVS/ML2 -> OVN]
Summary: [Upgrade OSP13->OSP16.1] service catalog cinderv3 is duplicated [OVS/ML2 -> OVN]
Keywords:
Status: CLOSED DUPLICATE of bug 1878492
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-heat-templates
Version: 16.1 (Train)
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: z3
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Giulio Fidente
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-02 10:51 UTC by Alberto Gonzalez
Modified: 2020-09-29 16:09 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-29 16:09:20 UTC
Target Upstream Version:
Embargoed:




Links
System ID Status Summary Last Updated
Launchpad 1886407 None None 2020-07-06 10:10:53 UTC
OpenStack gerrit 739451 ABANDONED Prefer unversioned endpoint for cinder v3 API 2021-02-13 17:12:01 UTC

Description Alberto Gonzalez 2020-07-02 10:51:12 UTC
Description of problem:

After upgrading the environment from RHOSP 13 to RHOSP 16.1, I tried to convert ML2/OVS to OVN, and the migration is failing due to duplicated service catalog entries.

Guide followed:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/networking_with_open_virtual_network/migrating-ml2ovs-to-ovn


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run: ovn_migration.sh start-migration

Actual results:

failed: [undercloud] (item={'started': 1, 'finished': 0, 'ansible_job_id': '496091167615.23522',
'results_file': '/root/.ansible_async/496091167615.23522', 'changed': True, 'failed': False,
'tripleo_keystone_resources_data': {'key': 'cinderv3', 'value': {'endpoints':
{'admin': 'http://[fd00:fd00:fd00:2000::18]:8776/v3/%(tenant_id)s',
'internal': 'http://[fd00:fd00:fd00:2000::18]:8776/v3/%(tenant_id)s',
'public': 'http://[2001:db8:fd00:1000::19]:8776/v3/%(tenant_id)s'},
'region': 'regionOne', 'service': 'volumev3', 'users': {'cinderv3': {'password': 'XXX',
'roles': ['admin', 'service']}}}}, 'ansible_loop_var': 'tripleo_keystone_resources_data'}) =>
{"ansible_job_id": "496091167615.23522", "ansible_loop_var":
"tripleo_keystone_resources_endpoint_async_result_item", "attempts": 1, "changed": false,
"finished": 1, "msg": "Multiple matches found for cinderv3",
"tripleo_keystone_resources_endpoint_async_result_item": {"ansible_job_id": "496091167615.23522",
"ansible_loop_var": "tripleo_keystone_resources_data", "changed": true, "failed": false,
"finished": 0, "results_file": "/root/.ansible_async/496091167615.23522", "started": 1,
"tripleo_keystone_resources_data": {"key": "cinderv3", "value": {"endpoints":
{"admin": "http://[fd00:fd00:fd00:2000::18]:8776/v3/%(tenant_id)s",
"internal": "http://[fd00:fd00:fd00:2000::18]:8776/v3/%(tenant_id)s",
"public": "http://[2001:db8:fd00:1000::19]:8776/v3/%(tenant_id)s"}, "region": "regionOne",
"service": "volumev3", "users": {"cinderv3": {"password": "XXX", "roles": ["admin", "service"]}}}}}}


Expected results:

Migration completed


Additional info:

(overcloud) [stack@undercloud ~]$ openstack catalog list -c Name -f value|grep -c cinderv3
2
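
To see which service types sit behind the duplicated name, a hedged check along these lines should work (exact output depends on the deployment):

(overcloud) [stack@undercloud ~]$ openstack service list -c ID -c Name -c Type | grep cinderv3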

Comment 1 Giulio Fidente 2020-07-06 10:01:54 UTC
It looks like we're creating some new endpoints in 16.1 because not only is the service *name* set to "cinderv3" but the service *type* as well ("volumev3"); hence it doesn't refresh the existing service type (which is set to "volume") but rather creates some new ones:

| 029ea458062140d9ae4f4ccd2c9766db | regionOne | cinderv3     | volumev3       | True    | internal  | http://10.20.0.18:8776/v3/%(tenant_id)s
| 0b5de1e471b543209321c5e46619b823 | regionOne | cinderv2     | volumev2       | True    | public    | http://10.1.0.100:8776/v2/%(tenant_id)s
| 0fd68734f3b84090a64c9fb29b8aa492 | regionOne | cinderv3     | volume         | True    | internal  | http://10.20.0.18:8776/v3/%(tenant_id)s
| 13d39a29f87c417ca395845b197c53bf | regionOne | cinderv3     | volumev3       | True    | public    | http://10.1.0.100:8776/v3/%(tenant_id)s
| 367120213b2a47bb9ed005c1de66d4e2 | regionOne | cinderv2     | volumev2       | True    | admin     | http://10.20.0.18:8776/v2/%(tenant_id)s
| 82d860dafce74fe28986b64be5d58702 | regionOne | cinderv3     | volume         | True    | public    | http://10.1.0.100:8776/v3/%(tenant_id)s
| a9a4333bffcf450f9bc0df3576675972 | regionOne | cinderv2     | volumev2       | True    | internal  | http://10.20.0.18:8776/v2/%(tenant_id)s
| bf9c18b1e1174662bc7bb2f9af32e3c7 | regionOne | cinderv3     | volumev3       | True    | admin     | http://10.20.0.18:8776/v3/%(tenant_id)s
| db31919f88274cf2b7a8a37280720bc3 | regionOne | cinderv3     | volume         | True    | admin     | http://10.20.0.18:8776/v3/%(tenant_id)s

Apparently we started using "volumev3" with [1]; Luigi, Eric, or Gorka, is there any reason to prefer "volume" or "volumev3" for the v3 endpoints?

Based on that, I guess we'll have to fix the 16.1 templates to behave accordingly.

1. https://github.com/openstack/tripleo-heat-templates/commit/32279c4a327ff1f00ccfd0376d27745470863330

Comment 2 Alan Bishop 2020-07-06 15:23:03 UTC
As I noted in the upstream gerrit review, I believe you'll find all of the endpoints were inherited from the OSP 13 deployment. The intent is to provide "volumev2" and "volumev3" services, but a "volume" service (which traditionally was used by cinder's v1 API) had to be retained in Queens to fix a tempest issue [1].

[1] https://bugs.launchpad.net/tripleo/+bug/1822080 (no BZ)

I think the upgrade process could remove the obsolete "volume" service (cinder's v1 API was removed in Queens).

Comment 6 Giulio Fidente 2020-07-06 16:00:52 UTC
The workaround is to manually remove the "volume" service type endpoints from keystone before launching the OVN migration tool; a sketch follows.
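
A minimal sketch of that cleanup, assuming admin credentials are loaded and that exactly one obsolete v1 service carries the "volume" type (verify what matches before deleting; the variable name is illustrative):

# find the obsolete v1 "volume" service by its type column
volume_svc=$(openstack service list -f value -c ID -c Type | awk '$2 == "volume" {print $1}')
# remove its endpoints first, then the service itself
for ep in $(openstack endpoint list --service "$volume_svc" -f value -c ID); do
    openstack endpoint delete "$ep"
done
openstack service delete "$volume_svc"

Afterwards, the check from the Additional info section (openstack catalog list -c Name -f value | grep -c cinderv3) should drop to 1.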

Comment 7 Luigi Toscano 2020-08-21 13:35:22 UTC
While the upgrade process could remove the "volume" service, if this is a valid catalog configuration, isn't the OVS->OVN migration script being too strict? I think it should just work in this scenario.

Comment 9 Luigi Toscano 2020-09-29 16:09:20 UTC
See https://bugzilla.redhat.com/show_bug.cgi?id=1856906#c13 for more details. This is a duplicate of bug 1878492.

*** This bug has been marked as a duplicate of bug 1878492 ***

