Bug 1835870
| Field | Value |
|---|---|
| Summary | [cinder] retype fails to change the name of intermediate volume after migrating data to it in 3par |
| Product | Red Hat OpenStack |
| Component | openstack-cinder |
| Version | 13.0 (Queens) |
| Target Release | 13.0 (Queens) |
| Target Milestone | z12 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Keywords | Triaged, ZStream |
| Reporter | Kamal Bhaskar <kbhaskar> |
| Assignee | Alan Bishop <abishop> |
| QA Contact | Tzach Shefi <tshefi> |
| Docs Contact | Chuck Copello <ccopello> |
| CC | abishop, dhill, eharney, geguileo, jamsmith, jhardee, jvisser, kbhaskar, kthakre, ltoscano, mgarciac, nweinber, pgrist, scohen, tshefi |
| Flags | tshefi: automate_bug- |
| Fixed In Version | openstack-cinder-12.0.10-11.el7ost |
| Doc Type | Bug Fix |
| Doc Text | This update fixes a bug that caused Cinder offline volume migration for HPE 3par storage to fail. |
| Cloned As | 1845631 (view as bug list) |
| Bug Depends On | 1845631, 1845632 |
| Type | Bug |
| Last Closed | 2020-06-24 11:51:47 UTC |
Description
Kamal Bhaskar 2020-05-14 16:17:19 UTC
We also see this exception when trying to initialize a connection to the migrated volume:
~~~
2020-05-14 16:13:05.484 46 DEBUG hpe3parclient.http [req-6af8c8be-9f73-4077-856f-9d9f27d6b45c d93d94d0eba0474294590ba2d7557b8e 869bf59fe6e743ed94b96702a3a67bcd - default default] RESP BODY:
_http_log_resp /usr/lib/python2.7/site-packages/hpe3parclient/http.py:185
2020-05-14 16:13:05.485 46 DEBUG cinder.coordination [req-6af8c8be-9f73-4077-856f-9d9f27d6b45c d93d94d0eba0474294590ba2d7557b8e 869bf59fe6e743ed94b96702a3a67bcd - default default] Lock "/var/lib/cinder/cinder-3par-e1e641ff-fd7b-48fa-9203-cf0f2f091ae2" released by "initialize_connection" :: held 8.593s _synchronized /usr/lib/python2.7/site-packages/cinder/coordination.py:162
2020-05-14 16:13:05.485 46 DEBUG cinder.volume.drivers.hpe.hpe_3par_fc [req-6af8c8be-9f73-4077-856f-9d9f27d6b45c d93d94d0eba0474294590ba2d7557b8e 869bf59fe6e743ed94b96702a3a67bcd - default default] <== decorator: exception (8594ms) HTTPNotFound() trace_logging_wrapper /usr/lib/python2.7/site-packages/cinder/utils.py:924
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager [req-6af8c8be-9f73-4077-856f-9d9f27d6b45c d93d94d0eba0474294590ba2d7557b8e 869bf59fe6e743ed94b96702a3a67bcd - default default] Driver initialize connection failed (error: Not found (HTTP 404) 23 - volume does not exist).: HTTPNotFound: Not found (HTTP 404) 23 - volume does not exist
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager Traceback (most recent call last):
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 4375, in _connection_create
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager conn_info = self.driver.initialize_connection(volume, connector)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 918, in trace_logging_wrapper
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager result = f(*args, **kwargs)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/zonemanager/utils.py", line 80, in decorator
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager conn_info = initialize_connection(self, *args, **kwargs)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "<string>", line 2, in initialize_connection
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/coordination.py", line 151, in _synchronized
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager return f(*a, **k)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/hpe/hpe_3par_fc.py", line 175, in initialize_connection
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager host = self._create_host(common, volume, connector)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/hpe/hpe_3par_fc.py", line 373, in _create_host
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager cpg = common.get_cpg(volume, allowSnap=True)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/hpe/hpe_3par_common.py", line 1957, in get_cpg
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager vol = self.client.getVolume(volume_name)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/hpe3parclient/client.py", line 464, in getVolume
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager response, body = self.http.get('/volumes/%s' % name)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/hpe3parclient/http.py", line 352, in get
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager return self._cs_request(url, 'GET', **kwargs)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/hpe3parclient/http.py", line 321, in _cs_request
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager **kwargs)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/hpe3parclient/http.py", line 297, in _time_request
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager resp, body = self.request(url, method, **kwargs)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/hpe3parclient/http.py", line 262, in request
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager raise exceptions.from_response(resp, body)
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager HTTPNotFound: Not found (HTTP 404) 23 - volume does not exist
2020-05-14 16:13:05.487 46 ERROR cinder.volume.manager
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server [req-6af8c8be-9f73-4077-856f-9d9f27d6b45c d93d94d0eba0474294590ba2d7557b8e 869bf59fe6e743ed94b96702a3a67bcd - default default] Exception during message handling: VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Driver initialize connection failed (error: Not found (HTTP 404) 23 - volume does not exist).
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 4422, in attachment_update
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server connector)
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 4381, in _connection_create
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server raise exception.VolumeBackendAPIException(data=err_msg)
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Driver initialize connection failed (error: Not found (HTTP 404) 23 - volume does not exist).
2020-05-14 16:13:05.498 46 ERROR oslo_messaging.rpc.server
~~~
Verified on:
openstack-cinder-12.0.10-11.el7ost.noarch
My own system failed to deploy in time, so I used a loaner system that doesn't have FC access.
Alan mentioned the fix isn't protocol specific, so this shouldn't be a problem as long as I use a NetApp-to-3par migration.
Create a NetApp-backed volume:
(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --image cirros --volume-type netapp --name vol1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-06-08T01:33:42.000000 |
| description | None |
| encrypted | False |
| id | 524ddd89-4a11-4d82-90b2-7017165f8574 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | vol1 |
| os-vol-host-attr:host | hostgroup@netapp#rhos_infra_tlv2 | -> netapp backed volume
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 4a68f4e940d146b594a2eb56256f67e9 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | 2020-06-08T01:33:42.000000 |
| user_id | 04f1e8ba7a8844f58e0917d8ffb4c706 |
| volume_type | netapp |
+--------------------------------+--------------------------------------+
The volume is available:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
Create a clone of the volume, still on the NetApp backend:
(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --source-volid 524ddd89-4a11-4d82-90b2-7017165f8574 --name netappclone
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | true |
| consistencygroup_id | None |
| created_at | 2020-06-08T01:42:30.000000 |
| description | None |
| encrypted | False |
| id | ca4074df-8d44-4243-abec-d8b753111836 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | netappclone |
| os-vol-host-attr:host | hostgroup@netapp#rhos_infra_tlv2 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 4a68f4e940d146b594a2eb56256f67e9 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | 524ddd89-4a11-4d82-90b2-7017165f8574 |
| status | creating |
| updated_at | 2020-06-08T01:42:30.000000 |
| user_id | 04f1e8ba7a8844f58e0917d8ffb4c706 |
| volume_type | netapp |
+--------------------------------+--------------------------------------+
Both volumes are now available:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
| ca4074df-8d44-4243-abec-d8b753111836 | available | netappclone | 1 | netapp | true | |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
Create a VM from the cloned volume while it is still NetApp backed:
(overcloud) [stack@undercloud-0 ~]$ openstack server create Test-VM-before-migrate --volume netappclone --availability-zone nova --wait --flavor small
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute-0.redhat.local |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.redhat.local |
| OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2020-06-08T01:52:09.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=10.0.0.247, 2620:52:0:13b8::1000:74 |
| adminPass | e8Lh8WDjhGNQ |
| config_drive | |
| created | 2020-06-08T01:51:56Z |
| flavor | small (fc7a8157-ed4f-4597-8f48-6d2910866cd1) |
| hostId | b6f72a27c8accf8fd6234e1cc7674c158d649cc0759883c7891706b7 |
| id | 7d80d55d-2955-44cd-9bbe-657ed9b5cd8f |
| image | |
| key_name | None |
| name | Test-VM-before-migrate |
| progress | 0 |
| project_id | 4a68f4e940d146b594a2eb56256f67e9 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2020-06-08T01:52:09Z |
| user_id | 04f1e8ba7a8844f58e0917d8ffb4c706 |
| volumes_attached | id='ca4074df-8d44-4243-abec-d8b753111836' |
+-------------------------------------+----------------------------------------------------------+
The instance is up and active (status ACTIVE above). Now delete the instance:
(overcloud) [stack@undercloud-0 ~]$ nova delete 7d80d55d-2955-44cd-9bbe-657ed9b5cd8f
Request to delete server 7d80d55d-2955-44cd-9bbe-657ed9b5cd8f has been accepted.
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
| ca4074df-8d44-4243-abec-d8b753111836 | available | netappclone | 1 | netapp | true | |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
The Cinder volume is available again; retype it to the 3par backend with an on-demand migration:
(overcloud) [stack@undercloud-0 ~]$ cinder retype --migration-policy on-demand ca4074df-8d44-4243-abec-d8b753111836 3par
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 0ac13849-6c8f-426c-b3fd-a25e42d69720 | available | netappclone | 1 | 3par | true | |
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
| ca4074df-8d44-4243-abec-d8b753111836 | retyping | netappclone | 1 | netapp | true | |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
| ca4074df-8d44-4243-abec-d8b753111836 | available | netappclone | 1 | 3par | true | |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
Retype completed; let's verify the cloned volume now resides on the 3par backend:
(overcloud) [stack@undercloud-0 ~]$ cinder show ca4074df-8d44-4243-abec-d8b753111836
+--------------------------------+-------------------------------------------------+
| Property | Value |
+--------------------------------+-------------------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | true |
| consistencygroup_id | None |
| created_at | 2020-06-08T01:42:30.000000 |
| description | None |
| encrypted | False |
| id | ca4074df-8d44-4243-abec-d8b753111836 |
| metadata | |
| migration_status | success |
| multiattach | False |
| name | netappclone |
| os-vol-host-attr:host | controller-0@3par#SSD_r5 | -> confirm volume migrated to 3par.
| os-vol-mig-status-attr:migstat | success |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 4a68f4e940d146b594a2eb56256f67e9 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | 524ddd89-4a11-4d82-90b2-7017165f8574 |
| status | available |
| updated_at | 2020-06-08T01:57:53.000000 |
| user_id | 04f1e8ba7a8844f58e0917d8ffb4c706 |
| volume_image_metadata | checksum : 1d3062cd89af34e419f7100277f38b2b |
| | container_format : bare |
| | disk_format : qcow2 |
| | image_id : 979a90b4-1bb5-4a93-8756-c1b619d54de9 |
| | image_name : cirros |
| | min_disk : 0 |
| | min_ram : 0 |
| | size : 16338944 |
| volume_type | 3par | -> vol type changed.
+--------------------------------+-------------------------------------------------+
Now let's boot a new VM from the migrated 3par-backed volume:
(overcloud) [stack@undercloud-0 ~]$ openstack server create Test-VM-after-migrate --volume ca4074df-8d44-4243-abec-d8b753111836 --availability-zone nova --wait --flavor small
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute-0.redhat.local |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.redhat.local |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2020-06-08T02:01:11.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=10.0.0.223, 2620:52:0:13b8::1000:99 |
| adminPass | Vi2U8dZh2eHe |
| config_drive | |
| created | 2020-06-08T02:00:52Z |
| flavor | small (fc7a8157-ed4f-4597-8f48-6d2910866cd1) |
| hostId | b6f72a27c8accf8fd6234e1cc7674c158d649cc0759883c7891706b7 |
| id | 92716ba5-f59b-4789-bcaf-18501d638a3d |
| image | |
| key_name | None |
| name | Test-VM-after-migrate |
| progress | 0 |
| project_id | 4a68f4e940d146b594a2eb56256f67e9 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE | -> instance is up
| updated | 2020-06-08T02:01:12Z |
| user_id | 04f1e8ba7a8844f58e0917d8ffb4c706 |
| volumes_attached | id='ca4074df-8d44-4243-abec-d8b753111836' |
+-------------------------------------+----------------------------------------------------------+
Volume in use:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| 524ddd89-4a11-4d82-90b2-7017165f8574 | available | vol1 | 1 | netapp | true | |
| ca4074df-8d44-4243-abec-d8b753111836 | in-use | netappclone | 1 | 3par | true | 92716ba5-f59b-4789-bcaf-18501d638a3d |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
Verified as working: a new instance was successfully booted from a volume that was first cloned on NetApp and then migrated to 3par.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2722