Cinder's nova catalog configuration does not match keystone's service name, so during some operations (like migration of in-use volumes) we get errors of the kind "EndpointNotFound" in the logs, as happened when QA tried to test a migration fix [1]. Originally cinder's default values for the "nova_catalog_info" and "nova_catalog_admin_info" configuration options were "compute:nova:publicURL" and "compute:nova:adminURL", but after the defaults were synchronized [2] with keystone's template [3] they were changed to "compute:Compute Service:publicURL" and "compute:Compute Service:adminURL". Cinder's default values for "nova_catalog_info" and "nova_catalog_admin_info" are therefore no longer valid for this installation, and they need to be configured explicitly. [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1255440#c4 [2]: https://github.com/openstack/cinder/commit/5ad15c040fdc115bca9efb1c952279988a2a48b3 [3]: https://github.com/openstack/keystone/blob/master/etc/default_catalog.templates#L12
Something is already happening upstream; you should participate in the https://review.openstack.org/#/c/219284/ review.
Martin, I guess we can patch THT to support the nova_catalog_info parameter now that the upstream patch is merged.
The patch was merged in master, which targets the M release. We also have to backport it to stable/kilo.
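Until the backported patch is available in a deployment, the workaround is to set the options explicitly in cinder.conf so that the middle field matches the service name actually registered in keystone's catalog. A sketch, assuming the installation registers the compute service under the name "nova" (verify against your own catalog, since the name varies by installer):

```ini
[DEFAULT]
# Format: service_type:service_name:endpoint_type. The service_name
# must match the name registered for nova in keystone's catalog.
nova_catalog_info = compute:nova:publicURL
nova_catalog_admin_info = compute:nova:adminURL
```

If the catalog instead registers the service as "Compute Service", the defaults already match and no override is needed.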
(In reply to Gorka Eguileor from comment #0) > Cinder's nova catalog configuration does not match keystone's service name > and during some operations (like migration of in-use volumes) so we get > errors in the logs of the kind "EndpointNotFound", as happened when QA tried > to test a migration fix [1]. What is the impact on the end user aside from errors appearing in logs? Does this prevent any volume migrations from working at all? Does it effectively and completely break that feature? Or are the errors cosmetic? If the feature of migrating volumes is completely broken, I would consider this a high severity bug vs. medium. Please provide more info on customer impact.
(In reply to Perry Myers from comment #7) This causes a failure for any functionality where Cinder has to talk to Nova. That functionality is:
- volume migration/retype of volumes while they are attached to instances
- instance locality filter in the scheduler (not sure what this failure looks like exactly)
- create/delete snapshots for nova-assisted drivers (GlusterFS)
Given the number of customer issues we've dealt with around migration in general, high severity seems reasonable to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2015:2677