Bug 1260717 - Cinder's nova catalog configuration is not set
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-puppet-modules
7.0 (Kilo)
Unspecified Unspecified
high Severity high
: z3
: 7.0 (Kilo)
Assigned To: Martin Magr
Gabriel Szasz
: Triaged, ZStream
Depends On:
Blocks:
Reported: 2015-09-07 09:57 EDT by Gorka Eguileor
Modified: 2016-04-26 19:39 EDT (History)
8 users

See Also:
Fixed In Version: openstack-puppet-modules-2015.1.8-24.el7ost
Doc Type: Bug Fix
Doc Text:
With this update, use the following workaround for this issue. In the cinder.conf file, update the following parameters:
  DEFAULT/nova_catalog_info = compute:nova:publicURL
  DEFAULT/nova_catalog_admin_info = compute:nova:publicURL
Alternatively, rename the Compute endpoints from 'nova' to 'Compute Service'.
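The workaround above corresponds to the following cinder.conf fragment (a sketch; note that the bug description gives compute:nova:adminURL as cinder's original admin default, so use adminURL instead if your deployment registers a separate admin endpoint for nova):

```ini
[DEFAULT]
# Point cinder back at the 'nova' service name that keystone
# registers; both lookups use the public endpoint here
nova_catalog_info = compute:nova:publicURL
nova_catalog_admin_info = compute:nova:publicURL
```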
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-21 12:10:11 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
OpenStack gerrit 219284 None None None Never
OpenStack gerrit 222120 None None None Never
OpenStack gerrit 230297 None None None Never
Red Hat Product Errata RHBA-2015:2677 normal SHIPPED_LIVE openstack-packstack and openstack-puppet-modules bug fix advisory 2015-12-21 16:58:17 EST

Description Gorka Eguileor 2015-09-07 09:57:01 EDT
Cinder's nova catalog configuration does not match keystone's service name, so during some operations (such as migration of in-use volumes) we get "EndpointNotFound" errors in the logs, as happened when QA tried to test a migration fix [1].

Originally, cinder's default values for the "nova_catalog_info" and "nova_catalog_admin_info" configuration entries were "compute:nova:publicURL" and "compute:nova:adminURL", but after the defaults were synchronized [2] with keystone's template [3], they were changed to "compute:Compute Service:publicURL" and "compute:Compute Service:adminURL".

As a result, cinder's default values for "nova_catalog_info" and "nova_catalog_admin_info" are no longer valid for the installation, and they need to be configured explicitly.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1255440#c4
[2]: https://github.com/openstack/cinder/commit/5ad15c040fdc115bca9efb1c952279988a2a48b3
[3]: https://github.com/openstack/keystone/blob/master/etc/default_catalog.templates#L12
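The manual workaround described above can be sketched as a small script that rewrites cinder.conf to restore the pre-synchronization defaults. This is only an illustration of the configuration change, not the shipped puppet fix; the file path is an assumed default, and the adminURL value follows the original defaults quoted in this description.

```python
import configparser

def apply_workaround(path="/etc/cinder/cinder.conf"):
    # Restore cinder's original nova catalog defaults so the service
    # name matches a keystone catalog entry registered as 'nova'.
    # Restrict delimiters to '=' so the ':' characters inside the
    # catalog values are never parsed as key/value separators.
    cfg = configparser.ConfigParser(delimiters=("=",))
    cfg.read(path)
    cfg["DEFAULT"]["nova_catalog_info"] = "compute:nova:publicURL"
    cfg["DEFAULT"]["nova_catalog_admin_info"] = "compute:nova:adminURL"
    with open(path, "w") as handle:
        cfg.write(handle)
```

After running this against a cinder.conf carrying the bad synchronized defaults, the two entries point back at the 'nova' service name; restart the cinder services for the change to take effect.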
Comment 3 Emilien Macchi 2015-09-09 08:36:47 EDT
Something is already happening upstream; you should participate in the https://review.openstack.org/#/c/219284/ review.
Comment 4 Emilien Macchi 2015-10-01 16:54:21 EDT
Martin, I guess we can patch THT to support the nova_catalog_info parameter, now that the upstream patch is merged.
Comment 5 Martin Magr 2015-10-02 03:40:06 EDT
The patch was merged in master, which is the M release. We also have to backport it to stable/kilo.
Comment 7 Perry Myers 2015-10-16 11:46:08 EDT
(In reply to Gorka Eguileor from comment #0)
> Cinder's nova catalog configuration does not match keystone's service name
> and during some operations (like migration of in-use volumes) so we get
> errors in the logs of the kind "EndpointNotFound", as happened when QA tried
> to test a migration fix [1].

What is the impact on the end user aside from errors appearing in logs?

Does this prevent any volume migrations from working at all? Does it effectively and completely break that feature? Or are the errors cosmetic? 

If the feature of migrating volumes is completely broken, I would consider this a high severity bug vs. medium.

Please provide more info on customer impact.
Comment 8 Eric Harney 2015-10-16 11:57:35 EDT
(In reply to Perry Myers from comment #7)

This causes a failure for any functionality where Cinder has to talk to Nova.

That functionality is:
  - volume migration/retype of volumes while they are attached to instances
  - instance locality filter in the scheduler (not sure what this failure looks like exactly)
  - create/delete snapshots for nova-assisted drivers (GlusterFS)

Given the number of customer issues we've dealt with around migration in general, high severity seems reasonable to me.
Comment 13 errata-xmlrpc 2015-12-21 12:10:11 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:2677
