Bug 1260717 - Cinder's nova catalog configuration is not set
Summary: Cinder's nova catalog configuration is not set
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-puppet-modules
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z3
Target Release: 7.0 (Kilo)
Assignee: Martin Magr
QA Contact: Gabriel Szasz
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-07 13:57 UTC by Gorka Eguileor
Modified: 2023-02-22 23:02 UTC
CC: 7 users

Fixed In Version: openstack-puppet-modules-2015.1.8-24.el7ost
Doc Type: Bug Fix
Doc Text:
With this update, use the following workaround for this issue: in the cinder.conf file, update the following parameters:
  DEFAULT/nova_catalog_info = compute:nova:publicURL
  DEFAULT/nova_catalog_admin_info = compute:nova:publicURL
Or, rename the Compute endpoints from 'nova' to 'Compute Service'.
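Applied as a cinder.conf fragment, the workaround in the Doc Text looks like this (the Doc Text sets both options to publicURL; note that, per the bug description, Cinder's original default for the admin entry was compute:nova:adminURL):

```ini
[DEFAULT]
# Point Cinder back at the service name this installation's Keystone
# actually registers ('nova') instead of 'Compute Service'.
nova_catalog_info = compute:nova:publicURL
nova_catalog_admin_info = compute:nova:publicURL
```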
Clone Of:
Environment:
Last Closed: 2015-12-21 17:10:11 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 219284 0 None None None Never
OpenStack gerrit 222120 0 None None None Never
OpenStack gerrit 230297 0 None None None Never
Red Hat Product Errata RHBA-2015:2677 0 normal SHIPPED_LIVE openstack-packstack and openstack-puppet-modules bug fix advisory 2015-12-21 21:58:17 UTC

Description Gorka Eguileor 2015-09-07 13:57:01 UTC
Cinder's nova catalog configuration does not match Keystone's service name, so during some operations (such as migration of in-use volumes) we get "EndpointNotFound" errors in the logs, as happened when QA tried to test a migration fix [1].

Originally, Cinder's default values for the "nova_catalog_info" and "nova_catalog_admin_info" configuration entries were "compute:nova:publicURL" and "compute:nova:adminURL", but after the defaults were synchronized [2] with Keystone's template [3], they were changed to "compute:Compute Service:publicURL" and "compute:Compute Service:adminURL".

So Cinder's default values for "nova_catalog_info" and "nova_catalog_admin_info" no longer match the service name registered in this installation's Keystone catalog, and they need to be configured explicitly.
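The mismatch can be sketched as follows. This is a minimal illustration of how a "service_type:service_name:endpoint_type" triplet is matched against a Keystone service catalog, not the actual Cinder code; the function and data shapes are assumptions for illustration only:

```python
# Sketch: matching a "service_type:service_name:endpoint_type" setting
# (e.g. nova_catalog_info) against a Keystone service catalog.
# Not the real Cinder implementation; names are illustrative.

class EndpointNotFound(Exception):
    pass

def find_endpoint(catalog, catalog_info):
    """Return the endpoint URL selected by a catalog_info triplet."""
    service_type, service_name, endpoint_type = catalog_info.split(':')
    for service in catalog:
        # Both the type AND the registered name must match.
        if service['type'] == service_type and service['name'] == service_name:
            return service['endpoints'][0][endpoint_type]
    raise EndpointNotFound(catalog_info)

# This installation's Keystone registers the Compute service as 'nova' ...
catalog = [{
    'type': 'compute',
    'name': 'nova',
    'endpoints': [{'publicURL': 'http://nova.example:8774/v2',
                   'adminURL': 'http://nova.example:8774/v2'}],
}]

# ... so the old default matches:
find_endpoint(catalog, 'compute:nova:publicURL')
# ... but the new default does not, raising EndpointNotFound:
# find_endpoint(catalog, 'compute:Compute Service:publicURL')
```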

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1255440#c4
[2]: https://github.com/openstack/cinder/commit/5ad15c040fdc115bca9efb1c952279988a2a48b3
[3]: https://github.com/openstack/keystone/blob/master/etc/default_catalog.templates#L12

Comment 3 Emilien Macchi 2015-09-09 12:36:47 UTC
Something is already happening upstream; you should participate in the https://review.openstack.org/#/c/219284/ review.

Comment 4 Emilien Macchi 2015-10-01 20:54:21 UTC
Martin, I guess we can patch THT to support the nova_catalog_info parameter now that the upstream patch is merged.

Comment 5 Martin Magr 2015-10-02 07:40:06 UTC
The patch was merged in master, which is the M release. We also have to backport it to stable/kilo.

Comment 7 Perry Myers 2015-10-16 15:46:08 UTC
(In reply to Gorka Eguileor from comment #0)
> Cinder's nova catalog configuration does not match keystone's service name
> and during some operations (like migration of in-use volumes) so we get
> errors in the logs of the kind "EndpointNotFound", as happened when QA tried
> to test a migration fix [1].

What is the impact on the end user aside from errors appearing in logs?

Does this prevent any volume migrations from working at all? Does it effectively and completely break that feature? Or are the errors cosmetic? 

If the feature of migrating volumes is completely broken, I would consider this a high severity bug vs. medium.

Please provide more info on customer impact.

Comment 8 Eric Harney 2015-10-16 15:57:35 UTC
(In reply to Perry Myers from comment #7)

This causes a failure for any functionality where Cinder has to talk to Nova.

That functionality is:
  - volume migration/retype of volumes while they are attached to instances
  - instance locality filter in the scheduler (not sure what this failure looks like exactly)
  - create/delete snapshots for nova-assisted drivers (GlusterFS)

Given the number of customer issues we've dealt with around migration in general, high severity seems reasonable to me.

Comment 13 errata-xmlrpc 2015-12-21 17:10:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:2677

