Description of problem:

During volume retypes, cinder fails in the "copy volume" phase with the following exception:

2018-11-15 23:52:17.451 221994 WARNING keystoneauth.identity.generic.base [req-fc9d12c5-a89b-44d5-b3e4-e4fc91df698a a97e0067446443af9eac9fd60f4c7d63 38fd5a11820b4dc9a5260218a5033279 - default default] Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager [req-fc9d12c5-a89b-44d5-b3e4-e4fc91df698a a97e0067446443af9eac9fd60f4c7d63 38fd5a11820b4dc9a5260218a5033279 - default default] Failed to copy volume 47320263-9431-45bf-be29-5e320555f53d to 301bec55-b1db-4bd9-8f52-f6096040fbf8
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager Traceback (most recent call last):
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1895, in _migrate_volume_generic
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     new_volume.id)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/compute/nova.py", line 181, in update_server_volume
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     new_volume_id)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/novaclient/v2/volumes.py", line 68, in update_server_volume
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     body, "volumeAttachment")
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 374, in _update
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     resp, body = self.api.client.put(url, body=body)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 196, in put
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     return self.request(url, 'PUT', **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 168, in request
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 344, in request
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 112, in request
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     return self.session.request(url, method, **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     return wrapped(*args, **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 486, in request
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     auth_headers = self.get_auth_headers(auth)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 757, in get_auth_headers
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     return auth.get_headers(self, **kwargs)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/plugin.py", line 90, in get_headers
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     token = self.get_token(session)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 90, in get_token
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     return self.get_access(session).auth_token
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 136, in get_access
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     self.auth_ref = self.get_auth_ref(session)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 179, in get_auth_ref
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     self._plugin = self._do_create_plugin(session)
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 174, in _do_create_plugin
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager     raise exceptions.DiscoveryFailure('Could not determine a suitable URL '
2018-11-15 23:52:17.452 221994 ERROR cinder.volume.manager DiscoveryFailure: Could not determine a suitable URL for the plugin

This only occurs for some volume backends that are near their capacity limits. Volume backends with plenty of capacity work fine for volume creation and retyping (migrating).

Version-Release number of selected component (if applicable):
OSP 10
openstack-cinder-9.1.4-33.el7ost.noarch       Thu Jun 28 13:49:37 2018
puppet-cinder-9.5.0-6.el7ost.noarch           Thu Jun 28 13:56:17 2018
python-cinder-9.1.4-33.el7ost.noarch          Thu Jun 28 13:49:27 2018
python-cinderclient-1.9.0-6.el7ost.noarch     Thu Jun 28 13:43:56 2018

How reproducible:
100% in this environment

Steps to Reproduce:
Additional details to follow in BZ notes.
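For readers tracing the stack: the exception originates in keystoneauth1's identity version discovery, not in cinder's copy logic itself. Below is a minimal sketch of that discovery step, with a placeholder auth URL and credentials (not values from this deployment):

# Minimal sketch of the keystoneauth1 discovery step seen in the traceback.
# All endpoint/credential values below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import generic

auth = generic.Password(
    auth_url='http://keystone.example:5000',   # placeholder auth URL
    username='cinder',
    password='secret',
    project_name='service',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

# generic.Password defers the choice between Identity v2 and v3 until first
# use: it queries auth_url to discover the available versions. If that
# discovery cannot produce a usable endpoint (unreachable URL, bad response,
# etc.), _do_create_plugin() raises DiscoveryFailure, which is exactly the
# exception cinder.volume.manager logs above.
print(sess.get_token())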
Alan,

Verification-wise: boot an instance, attach a volume, and retype the volume to the other backend, right?

What I'm worried about is this bit:

> This only occurs for some volume backends that are near their capacity
> limits. Volume backends with plenty of capacity work fine for volume
> creation and retyping (migrating).

I'm thinking: deploy with an LVM backend, create an NFS share on the undercloud, add NFS as a second Cinder backend, fill the share with other data, and then try to migrate. Any pitfalls I should worry about?
(In reply to Tzach Shefi from comment #25)
> What I'm worried about is this bit:
> This only occurs for some volume backends that are near their capacity
> limits. Volume backends with plenty of capacity work fine for volume
> creation and retyping (migrating).

Hi,

This information is incorrect and can be disregarded for this BZ. It was originally believed to be related to capacity, but that was later ruled out.
The "near their capacity limits" part is a red herring, see comment #3. The key part is that this BZ pertains retyping a volume when it's attached.
Tested on:
openstack-tripleo-heat-templates-5.3.10-21.el7ost
puppet-tripleo-5.6.8-19.el7ost
puppet-cinder-9.5.0-7.el7ost

Create an LVM-backed volume:

#cinder create 1 --image cirros --volume-type tripleo_iscsi --name lvm_vol
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| created_at                     | 2018-12-17T08:37:34.000000            |
| id                             | 71da663f-84d9-4b73-b7a2-31827bbed5fa  |
| name                           | lvm_vol                               |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
+--------------------------------+---------------------------------------+

Attach the volume to the instance booted earlier:

#nova volume-attach f83baa1c-1ecd-417c-82cd-229c0e026f16 71da663f-84d9-4b73-b7a2-31827bbed5fa auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 71da663f-84d9-4b73-b7a2-31827bbed5fa |
| serverId | f83baa1c-1ecd-417c-82cd-229c0e026f16 |
| volumeId | 71da663f-84d9-4b73-b7a2-31827bbed5fa |
+----------+--------------------------------------+

The volume reaches the attached state:

#cinder list | grep 71da663f
| 71da663f-84d9-4b73-b7a2-31827bbed5fa | in-use | lvm_vol | 1 | tripleo_iscsi | true | f83baa1c-1ecd-417c-82cd-229c0e026f16 |

Retype the attached volume to the other backend:

#cinder retype 71da663f-84d9-4b73-b7a2-31827bbed5fa k2 --migration-policy on-demand

During migration:

#cinder show 71da663f-84d9-4b73-b7a2-31827bbed5fa
| created_at                     | 2018-12-17T08:37:34.000000            |
| id                             | 71da663f-84d9-4b73-b7a2-31827bbed5fa  |
| migration_status               | migrating                             |
| name                           | lvm_vol                               |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | migrating                             |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | a95cb0dd6fc7482292a46ea8a05d5d23      |
| size                           | 1                                     |
| status                         | retyping                              |
| updated_at                     | 2018-12-17T08:45:41.000000            |
| volume_type                    | tripleo_iscsi                         |

Waited a few minutes; it doesn't look like it's going to finish:

#cinder list
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+
| ID                                   | Status         | Name    | Size | Volume Type   | Bootable | Attached to                          |
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+
| 1849f69f-19ca-4756-9ccf-89d84bdb8056 | error_deleting | -       | 1    | k2            | false    |                                      |
| 71da663f-84d9-4b73-b7a2-31827bbed5fa | retyping       | lvm_vol | 1    | tripleo_iscsi | true     | f83baa1c-1ecd-417c-82cd-229c0e026f16 |
| bfe35faa-85ce-43f9-83a0-e3130ef0a8ac | attaching      | lvm_vol | 1    | k2            | true     |                                      |
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+

These look promising as leads:

/var/log/cinder/api.log:2018-12-17 08:52:57.245 31693 DEBUG cinder.api.openstack.wsgi [req-0b0692c3-10ce-42bd-be7b-cb637d569640 fecb91461651455aaf2332abd8dc2024 dc65aeb0b5174f9dbfa87ed6cbb564c1 - default default] Action body: {"os-migrate_volume_completion": {"new_volume": "bfe35faa-85ce-43f9-83a0-e3130ef0a8ac", "error": true}} get_method /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:985

/var/log/cinder/api.log:2018-12-17 08:52:57.246 31693 DEBUG cinder.api.openstack.wsgi [req-0b0692c3-10ce-42bd-be7b-cb637d569640 fecb91461651455aaf2332abd8dc2024 dc65aeb0b5174f9dbfa87ed6cbb564c1 - default default] Action: 'action', calling method: <bound method VolumeAdminController._migrate_volume_completion of <cinder.api.contrib.admin_actions.VolumeAdminController object at 0x7f78f3a10290>>, body: {"os-migrate_volume_completion": {"new_volume": "bfe35faa-85ce-43f9-83a0-e3130ef0a8ac", "error": true}} _process_stack /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:868

/var/log/cinder/scheduler.log:2018-12-17 08:37:34.668 36718 DEBUG oslo_db.sqlalchemy.engines [req-e06bc2c7-6c8b-4fb8-8894-24ffc17712ec d27672ee3f5546749333de4c1956aab2 a95cb0dd6fc7482292a46ea8a05d5d23 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261

/var/log/cinder/volume.log:2018-12-17 08:45:42.948 809701 DEBUG oslo_db.sqlalchemy.engines [req-d6a62c38-91e7-40d7-b2b3-d5f5c37daf41 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261

/var/log/cinder/volume.log:2018-12-17 08:52:57.440 809701 INFO cinder.volume.manager [req-0b0692c3-10ce-42bd-be7b-cb637d569640 fecb91461651455aaf2332abd8dc2024 dc65aeb0b5174f9dbfa87ed6cbb564c1 - default default] migrate_volume_completion is cleaning up an error for volume 71da663f-84d9-4b73-b7a2-31827bbed5fa (temporary volume bfe35faa-85ce-43f9-83a0-e3130ef0a8ac)

After waiting a while, I now see the migration failed:

#cinder list
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+
| ID                                   | Status         | Name    | Size | Volume Type   | Bootable | Attached to                          |
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+
| 1849f69f-19ca-4756-9ccf-89d84bdb8056 | error_deleting | -       | 1    | k2            | false    |                                      |
| 71da663f-84d9-4b73-b7a2-31827bbed5fa | in-use         | lvm_vol | 1    | tripleo_iscsi | true     | f83baa1c-1ecd-417c-82cd-229c0e026f16 |
+--------------------------------------+----------------+---------+------+---------------+----------+--------------------------------------+

[stack@undercloud-0 ~]$ cinder show 71da663f-84d9-4b73-b7a2-31827bbed5fa
| attachments                    | [{u'server_id': u'f83baa1c-1ecd-417c-82cd-229c0e026f16', u'attachment_id': u'af632ad4-0454-4f0c-bfed-8591ab8dd495', u'attached_at': u'2018-12-17T08:42:20.000000', u'host_name': None, u'volume_id': u'71da663f-84d9-4b73-b7a2-31827bbed5fa', u'device': u'/dev/vdb', u'id': u'71da663f-84d9-4b73-b7a2-31827bbed5fa'}] |
| created_at                     | 2018-12-17T08:37:34.000000            |
| id                             | 71da663f-84d9-4b73-b7a2-31827bbed5fa  |
| migration_status               | error                                 |
| name                           | lvm_vol                               |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | error                                 |

Alan, I'll look at the logs right now and will post any issues; I may need your help, as this failed verification. It might be a backend/config issue.

FYI, an unattached LVM volume (details below) migrated fine from LVM to K2:

| 72c72936-2689-4a2a-8b1d-afb8bbeee264 | available | lvm-unattached-retype | 1 | k2 | false | |
Created attachment 1515038 [details] Cinder logs
Verified on:
openstack-tripleo-heat-templates-5.3.10-21.el7ost
puppet-tripleo-5.6.8-19.el7ost
puppet-cinder-9.5.0-7.el7ost

Unsure what's up with the K2 backend; that might explain the issue I hit above. Anyway, an attached (empty) volume was successfully retyped from LVM to netapp.

Booted a new instance:

#nova boot inst2 --flavor tiny --image cirros --nic net-id=8ba42b26-1573-450e-8d60-dac81f42f0b6

Created a new LVM volume:

#cinder create 1 --volume-type tripleo_iscsi --name lvm-vol2
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2018-12-17T11:48:24.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | c0d4dd20-ed8c-4cb4-88f3-07952233ae84  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | False                                 |
| name                           | lvm-vol2                              |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | a95cb0dd6fc7482292a46ea8a05d5d23      |
| replication_status             | disabled                              |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | creating                              |
| updated_at                     | 2018-12-17T11:48:25.000000            |
| user_id                        | d27672ee3f5546749333de4c1956aab2      |
| volume_type                    | tripleo_iscsi                         |
+--------------------------------+---------------------------------------+

Attached the volume to the instance:

#nova volume-attach 7fbe8ae9-41ed-431b-9a4d-45740e465db1 c0d4dd20-ed8c-4cb4-88f3-07952233ae84 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | c0d4dd20-ed8c-4cb4-88f3-07952233ae84 |
| serverId | 7fbe8ae9-41ed-431b-9a4d-45740e465db1 |
| volumeId | c0d4dd20-ed8c-4cb4-88f3-07952233ae84 |
+----------+--------------------------------------+

Retype the attached volume:

#cinder retype c0d4dd20-ed8c-4cb4-88f3-07952233ae84 netapp --migration-policy on-demand

#cinder list
| c0d4dd20-ed8c-4cb4-88f3-07952233ae84 | in-use | lvm-vol2 | 1 | netapp | false | 7fbe8ae9-41ed-431b-9a4d-45740e465db1 |

Notice it's now on the netapp backend. Also see migration_status -> success:

#cinder show c0d4dd20-ed8c-4cb4-88f3-07952233ae84
| attachments                    | [{u'server_id': u'7fbe8ae9-41ed-431b-9a4d-45740e465db1', u'attachment_id': u'ccdcafd8-9d8b-4bc8-85b9-00512f5a2d6b', u'attached_at': u'2018-12-17T11:51:04.000000', u'host_name': None, u'volume_id': u'c0d4dd20-ed8c-4cb4-88f3-07952233ae84', u'device': u'/dev/vdb', u'id': u'c0d4dd20-ed8c-4cb4-88f3-07952233ae84'}] |
| availability_zone              | nova                                             |
| bootable                       | false                                            |
| consistencygroup_id            | None                                             |
| created_at                     | 2018-12-17T11:48:24.000000                       |
| description                    | None                                             |
| encrypted                      | False                                            |
| id                             | c0d4dd20-ed8c-4cb4-88f3-07952233ae84             |
| metadata                       | {u'readonly': u'False', u'attached_mode': u'rw'} |
| migration_status               | success                                          |
| multiattach                    | False                                            |
| name                           | lvm-vol2                                         |
| os-vol-host-attr:host          | hostgroup@netapp#rhos_cinder                     |
| os-vol-mig-status-attr:migstat | success                                          |
| os-vol-mig-status-attr:name_id | 68e26092-a08e-40f0-96e7-4750b7368a24             |
| os-vol-tenant-attr:tenant_id   | a95cb0dd6fc7482292a46ea8a05d5d23                 |
| replication_status             | disabled                                         |
| size                           | 1                                                |
| snapshot_id                    | None                                             |
| source_volid                   | None                                             |
| status                         | in-use                                           |
| updated_at                     | 2018-12-17T11:51:05.000000                       |
| user_id                        | d27672ee3f5546749333de4c1956aab2                 |
| volume_type                    | netapp                                           |

Now retry with a volume created from an image, which is what failed on the K2 backend:

#cinder create 1 --volume-type tripleo_iscsi --image cirros --name lvm_vol3
-> 3b4543b9-4ddb-48be-b827-bc66fca8fa77

Attach to the same instance (inst2):

#nova volume-attach 7fbe8ae9-41ed-431b-9a4d-45740e465db1 3b4543b9-4ddb-48be-b827-bc66fca8fa77 auto

Retype this newly attached, image-based volume:

#cinder retype 3b4543b9-4ddb-48be-b827-bc66fca8fa77 netapp --migration-policy on-demand

| 3b4543b9-4ddb-48be-b827-bc66fca8fa77 | retyping | lvm_vol3 | 1 | tripleo_iscsi | true | 7fbe8ae9-41ed-431b-9a4d-45740e465db1 |
->
| 3b4543b9-4ddb-48be-b827-bc66fca8fa77 | in-use   | lvm_vol3 | 1 | netapp        | true | 7fbe8ae9-41ed-431b-9a4d-45740e465db1 |

Again looks good; OK to verify. Not sure why the K2 backend failed, but that shouldn't stop verification; it may be a backend support issue. Alan, I've cleared the needinfo; don't waste time debugging K2 unless you want to go down that rabbit hole.
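For completeness, the retype used throughout maps to a single python-cinderclient call. A sketch with placeholder credentials, equivalent to the CLI command above:

# Programmatic equivalent of:
#   cinder retype 3b4543b9-4ddb-48be-b827-bc66fca8fa77 netapp --migration-policy on-demand
# Credentials are placeholders, as in the earlier sketches.
from keystoneauth1 import session
from keystoneauth1.identity import generic
from cinderclient import client as cinder_client

auth = generic.Password(auth_url='http://keystone.example:5000',
                        username='admin', password='secret',
                        project_name='admin',
                        user_domain_name='Default',
                        project_domain_name='Default')
cinder = cinder_client.Client('2', session=session.Session(auth=auth))

cinder.volumes.retype('3b4543b9-4ddb-48be-b827-bc66fca8fa77', 'netapp', 'on-demand')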
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0055