Bug 1752560 - Cinder retype from non-type to new type on NFS deletes the Volume
Summary: Cinder retype from non-type to new type on NFS deletes the Volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 10.0 (Newton)
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: z14
Target Release: 10.0 (Newton)
Assignee: Alan Bishop
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard:
Depends On: 1740560 1749491 1749493
Blocks:
 
Reported: 2019-09-16 15:56 UTC by Alan Bishop
Modified: 2020-04-02 10:26 UTC
CC List: 6 users

Fixed In Version: openstack-cinder-9.1.4-54.el7ost
Doc Type: Bug Fix
Doc Text:
In previous releases, the Block Storage NFS volume migration code renamed the volume file on the destination NFS server to match the file name on the source NFS server, and the migration finished by deleting the source volume. When an NFS volume was retyped using migration, the source and destination back ends could reside on the same NFS server, in which case deleting the source also removed the freshly renamed destination file. With this release, the volume file is renamed only when the source and destination files reside on different NFS servers, so retyping an NFS volume using migration works correctly.
Clone Of: 1740560
Environment:
Last Closed: 2020-04-02 10:26:00 UTC
Target Upstream Version:
Embargoed:


Attachments
cinder and nova logs (1.01 MB, application/gzip)
2020-03-11 12:00 UTC, Tzach Shefi


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1798468 0 None None None 2019-09-16 15:56:58 UTC
OpenStack gerrit 678278 0 None MERGED Fix NFS volume retype with migrate 2020-08-05 17:57:21 UTC
Red Hat Product Errata RHBA-2020:1301 0 None None None 2020-04-02 10:26:07 UTC

Description Alan Bishop 2019-09-16 15:56:59 UTC
+++ This bug was initially created as a clone of Bug #1740560 +++

Description of problem:
Migrating a volume with no volume type ("none") to a volume type on the same back end (NFS) ends up deleting the volume.

Version-Release number of selected component (if applicable):
RHOSP 13.0.7
rhosp13/openstack-cinder-volume 13.0-79

How reproducible:
Always

Steps to Reproduce:
1.
openstack volume create --size 5 test-vol01
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | nova                                                             |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2019-08-13T08:58:15.000000                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach         | False                                                            |
| name                | test-vol01                                                       |
| properties          |                                                                  |
| replication_status  | None                                                             |
| size                | 5                                                                |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | None                                                             |
| updated_at          | None                                                             |
| user_id             | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+---------------------+------------------------------------------------------------------+

2.
openstack volume type list
+--------------------------------------+--------+-----------+
| ID                                   | Name   | Is Public |
+--------------------------------------+--------+-----------+
| 8645458e-062e-4321-9103-cd656ca0cee6 | Legacy | True      |
+--------------------------------------+--------+-----------+
openstack volume type show Legacy
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | Default Storage                      |
| id                 | 8645458e-062e-4321-9103-cd656ca0cee6 |
| is_public          | True                                 |
| name               | Legacy                               |
| properties         | volume_backend_name='tripleo_nfs'    |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+

3.
openstack volume set --type Legacy test-vol01 
cinder-volume.log: ERROR oslo_messaging.rpc.server VolumeMigrationFailed: Volume migration failed: Retype requires migration but is not allowed.

4.
openstack volume set  --retype-policy on-demand --type Legacy test-vol01
openstack volume show test-vol01
+------------------------------+------------------------------------------------------------------+
| Field                        | Value                                                            |
+------------------------------+------------------------------------------------------------------+
| attachments                  | []                                                               |
| availability_zone            | nova                                                             |
| bootable                     | false                                                            |
| consistencygroup_id          | None                                                             |
| created_at                   | 2019-08-13T08:58:15.000000                                       |
| description                  | None                                                             |
| encrypted                    | False                                                            |
| id                           | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach                  | False                                                            |
| name                         | test-vol01                                                       |
| os-vol-tenant-attr:tenant_id | dbe2fb6b113b418da018420b7bc88240                                 |
| properties                   |                                                                  |
| replication_status           | None                                                             |
| size                         | 5                                                                |
| snapshot_id                  | None                                                             |
| source_volid                 | None                                                             |
| status                       | available                                                        |
| type                         | Legacy                                                           |
| updated_at                   | 2019-08-13T09:02:25.000000                                       |
| user_id                      | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+------------------------------+------------------------------------------------------------------+

Actual results:
The volume is copied to a new ID, but Cinder ends up deleting both the copy and the original on the back end.
The volume then still shows up in Cinder, but any usage fails (attaching, snapshots, etc.).

openstack volume snapshot create --volume test-vol01 test-snap01
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2019-08-13T09:05:16.337204           |
| description | None                                 |
| id          | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name        | test-snap01                          |
| properties  |                                      |
| size        | 5                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+-------------+--------------------------------------+

openstack volume snapshot show test-snap01
+--------------------------------------------+--------------------------------------+
| Field                                      | Value                                |
+--------------------------------------------+--------------------------------------+
| created_at                                 | 2019-08-13T09:05:16.000000           |
| description                                | None                                 |
| id                                         | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name                                       | test-snap01                          |
| os-extended-snapshot-attributes:progress   | 0%                                   |
| os-extended-snapshot-attributes:project_id | dbe2fb6b113b418da018420b7bc88240     |
| properties                                 |                                      |
| size                                       | 5                                    |
| status                                     | error                                |
| updated_at                                 | 2019-08-13T09:05:16.000000           |
| volume_id                                  | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+--------------------------------------------+--------------------------------------+

2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server [req-8065c636-b6fe-4d08-80a4-42413a8fa6ee 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a dbe2fb6b113b418da018420b7bc88240 - f0cab1f633da4ec99b5d2822c5abced5 f0cab1f633da4ec99b5d2822c5abced5] Exception during message handling: ProcessExecutionError: Unexpected error while running command.
Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
Exit code: 1
Stdout: u''
Stderr: u"qemu-img: Could not open '/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': Could not open '/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': No such file or directory\n"
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/objects/cleanable.py", line 207, in wrapper
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1096, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.save()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1088, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     model_update = self.driver.create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/coordination.py", line 151, in _synchronized
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return f(*a, **k)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 566, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1412, in _create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     new_snap_path)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1246, in _do_create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.volume.name)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 542, in _qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=True)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 764, in _qemu_img_info_base
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=run_as_root)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/image/image_utils.py", line 111, in qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     prlimit=QEMU_IMG_LIMITS)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 126, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return processutils.execute(*cmd, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     cmd=sanitized_cmd)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Exit code: 1
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Stdout: u''

Expected results:
cinder retype should either just change the Volume type of the original Volume or keep the copied Volume.

Additional info:

cinder.conf backend:

[tripleo_nfs]
backend_host=hostgroup
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False
nfs_sparsed_volumes = True
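
For reference, the file named by nfs_shares_config lists the NFS exports the driver mounts, one host:/path entry per line (the address and path below are illustrative, not taken from this deployment):

# /etc/cinder/shares-nfs.conf
192.168.24.1:/export/cinder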

--- Additional comment from Alan Bishop on 2019-08-20 20:28:07 UTC ---

Sorry for the delay, we're taking a look at this now.

--- Additional comment from Alan Bishop on 2019-08-20 22:04:12 UTC ---

I am not able to reproduce the problem (I'm running a newer version of cinder, but the remotefs driver hasn't changed significantly).

Please reproduce the problem with debug logging enabled, and attach the cinder logs (especially the cinder-volume log). I need to see the logs during the time the volume is migrated to the Legacy backend.

--- Additional comment from Johannes Beisiegel on 2019-08-21 08:39:35 UTC ---



--- Additional comment from Johannes Beisiegel on 2019-08-21 08:45:52 UTC ---

I was able to replicate the issue on our staging environment with debug enabled.
The volume ID is a6277876-c126-4fa8-a3b9-0d99a83c07b5, and the temporary volume ID for the retype is 7f64ffe7-348f-4dd9-93f0-6be71910bc90.

On the NFS Host I can observe the following:

# ls -lh
total 2
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-r--r-- 1 root  42436   5.0G Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436     0B Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:21 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 0

--- Additional comment from Alan Bishop on 2019-08-23 04:30:46 UTC ---

I am now able to reproduce the problem, and am investigating its cause.

--- Additional comment from Alan Bishop on 2019-08-29 15:03:09 UTC ---

The fix has merged on master, and upstream backports are underway.
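
For reference, a minimal sketch of the logic the fix describes (hypothetical helper names, not the merged gerrit 678278 patch): when migration finishes, the destination file is renamed to the source's name only if the two files live on different shares, so the subsequent source deletion can no longer remove the surviving copy.

import os

def finalize_nfs_migration(src_path, dst_path):
    # Same mount directory stands in here for "same NFS server"; in that
    # case, renaming the destination to the source's name would let the
    # later source-volume cleanup delete the only remaining copy.
    if os.path.dirname(src_path) == os.path.dirname(dst_path):
        return dst_path  # keep the new name; Cinder tracks it via name_id
    new_path = os.path.join(os.path.dirname(dst_path),
                            os.path.basename(src_path))
    os.rename(dst_path, new_path)
    return new_path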

Comment 5 Tzach Shefi 2020-03-11 11:29:56 UTC
Alan, 
Gist of things: retype seems to work; however, attach/clone of the retyped volume fails.


Tested on:
openstack-cinder-9.1.4-53.el7ost.noarch


On Cinder backed by an NFS share:

Create a volume:

[stack@undercloud-0 ~]$ cinder create 5 --name EmptyVolType
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-11T10:30:53.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 12a2dc46-687c-41c9-a390-49cddd77bb3e |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | EmptyVolType                         |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 36d3464abfe0476cad8f668b2261cd8e     |
| replication_status             | disabled                             |
| size                           | 5                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | bb83a7a259dc444ebf4da26964d011bc     |
| volume_type                    | None                                 |  -> Volume type is the default "none" / not set
+--------------------------------+--------------------------------------+

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 12a2dc46-687c-41c9-a390-49cddd77bb3e | available | EmptyVolType | 5    | -           | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Create a Legacy volume type on the same NFS back end:

[stack@undercloud-0 ~]$ cinder type-create Legacy
+--------------------------------------+--------+-------------+-----------+
| ID                                   | Name   | Description | Is_Public |
+--------------------------------------+--------+-------------+-----------+
| 13867add-bb3f-4eed-b6c2-976ef2dd9ca3 | Legacy | -           | True      |
+--------------------------------------+--------+-------------+-----------+

[stack@undercloud-0 ~]$ cinder type-key Legacy set volume_backend_name=nfs
[stack@undercloud-0 ~]$ cinder extra-specs-list
+--------------------------------------+--------+----------------------------------+
| ID                                   | Name   | extra_specs                      |
+--------------------------------------+--------+----------------------------------+
| 13867add-bb3f-4eed-b6c2-976ef2dd9ca3 | Legacy | {u'volume_backend_name': u'nfs'} |
+--------------------------------------+--------+----------------------------------+

Now let's retype the volume:

[stack@undercloud-0 ~]$ cinder list                                                                                                                                                                                                          
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+                                                                                                                            
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |                                                                                                                            
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+                                                                                                                            
| 12a2dc46-687c-41c9-a390-49cddd77bb3e | available | EmptyVolType | 5    | -           | false    |             |                                                                                                                            
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+                                                                                                                            
[stack@undercloud-0 ~]$ cinder retype 12a2dc46-687c-41c9-a390-49cddd77bb3e Legacy --migration-policy on-demand
[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 12a2dc46-687c-41c9-a390-49cddd77bb3e | retyping  | EmptyVolType | 5    | -           | false    |             |
| 25977f64-829b-4abc-a60b-7f95fdd857c1 | available | EmptyVolType | 5    | Legacy      | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 12a2dc46-687c-41c9-a390-49cddd77bb3e | available | EmptyVolType | 5    | Legacy      | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


At first glance this looks OK to verify:
an NFS volume with no type set ("none") was successfully retyped to the Legacy volume type (same NFS server).
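
As an extra check against the original data-loss symptom, the export can be listed on the NFS server itself, as in the reproduction above (export path illustrative): after the retype completes, exactly one non-empty volume-<uuid> file should remain.

# On the NFS server (export path illustrative):
# ls -lh /export/cinder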

[stack@undercloud-0 ~]$ cinder show 12a2dc46-687c-41c9-a390-49cddd77bb3e
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-11T10:30:53.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 12a2dc46-687c-41c9-a390-49cddd77bb3e |
| metadata                       | {}                                   |
| migration_status               | success                              |
| multiattach                    | False                                |
| name                           | EmptyVolType                         |
| os-vol-host-attr:host          | hostgroup@nfs#nfs                    |
| os-vol-mig-status-attr:migstat | success                              |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 36d3464abfe0476cad8f668b2261cd8e     |
| replication_status             | disabled                             |
| size                           | 5                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-03-11T10:55:20.000000           |
| user_id                        | bb83a7a259dc444ebf4da26964d011bc     |
| volume_type                    | Legacy                               |
+--------------------------------+--------------------------------------+

But the retyped volume fails to attach or be cloned.
If I recall correctly, OSP10 didn't support NFS snapshots (support was added later).

Let's try to attach the volume:

[stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 192ff28f-b592-4b4e-9c42-69d3beedca5c | inst1 | ACTIVE | -          | Running     | internal=192.168.0.16, 10.0.0.218 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
[stack@undercloud-0 ~]$ nova volume-attach 192ff28f-b592-4b4e-9c42-69d3beedca5c 12a2dc46-687c-41c9-a390-49cddd77bb3e auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 12a2dc46-687c-41c9-a390-49cddd77bb3e |
| serverId | 192ff28f-b592-4b4e-9c42-69d3beedca5c |
| volumeId | 12a2dc46-687c-41c9-a390-49cddd77bb3e |
+----------+--------------------------------------+
So this is odd; the volume failed to attach.
Compute logs:
/var/log/nova/nova-compute.log:25098:2020-03-11 11:06:31.639 44544 ERROR nova.virt.libvirt.driver [req-de56387d-a625-4ddd-bff0-031e8f1a09be bb83a7a259dc444ebf4da26964d011bc 36d3464abfe0476cad8f668b2261cd8e - - -] [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c] Failed to attach volume at mountpoint: /dev/vdb


Let's use the original volume as the source for a new volume:
[stack@undercloud-0 ~]$ cinder create 5  --source-volid 12a2dc46-687c-41c9-a390-49cddd77bb3e --name ClonedVolB
+--------------------------------+--------------------------------------+                                                                                                                                                                    
| Property                       | Value                                |                                                                                                                                                                    
+--------------------------------+--------------------------------------+                                                                                                                                                                    
| attachments                    | []                                   |                                                                                                                                                                    
| availability_zone              | nova                                 |                                                                                                                                                                    
| bootable                       | false                                |                                                                                                                                                                    
| consistencygroup_id            | None                                 |                                                                                                                                                                    
| created_at                     | 2020-03-11T11:20:18.000000           |                                                                                                                                                                    
| description                    | None                                 |                                                                                                                                                                    
| encrypted                      | False                                |                                                                                                                                                                    
| id                             | 7eebc70f-36bc-46c2-8d19-239d05810690 |                                                                                                                                                                    
| metadata                       | {}                                   |                                                                                                                                                                    
| migration_status               | None                                 |                                                                                                                                                                    
| multiattach                    | False                                |                                                                                                                                                                    
| name                           | ClonedVolB                           |                                                                                                                                                                    
| os-vol-host-attr:host          | hostgroup@nfs#nfs                    |                                                                                                                                                                    
| os-vol-mig-status-attr:migstat | None                                 |                                                                                                                                                                    
| os-vol-mig-status-attr:name_id | None                                 |                                                                                                                                                                    
| os-vol-tenant-attr:tenant_id   | 36d3464abfe0476cad8f668b2261cd8e     |                                                                                                                                                                    
| replication_status             | disabled                             |                                                                                                                                                                    
| size                           | 5                                    |                                                                                                                                                                    
| snapshot_id                    | None                                 |
| source_volid                   | 12a2dc46-687c-41c9-a390-49cddd77bb3e |
| status                         | creating                             |
| updated_at                     | 2020-03-11T11:20:18.000000           |
| user_id                        | bb83a7a259dc444ebf4da26964d011bc     |
| volume_type                    | Legacy                               |
+--------------------------------+--------------------------------------+

This too failed:

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 12a2dc46-687c-41c9-a390-49cddd77bb3e | available | EmptyVolType | 5    | Legacy      | false    |             |
| 7eebc70f-36bc-46c2-8d19-239d05810690 | error     | ClonedVolB   | 5    | Legacy      | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Let's call in the cavalry/dev.
Oddly, I had just verified the OSP15 clone of this BZ and attach worked fine, so maybe this is just an OSP10 issue?
https://bugzilla.redhat.com/show_bug.cgi?id=1749491#c5

Comment 7 Tzach Shefi 2020-03-11 12:00:17 UTC
Created attachment 1669237 [details]
cinder and nova logs

Clone failure in the c-vol log:

2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager Traceback (most recent call last):
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager     result = task.execute(**arguments)
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 931, in execute
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager     context, volume, **volume_spec)
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 493, in _create_from_source_volume
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager     model_update = self.driver.create_cloned_volume(volume, srcvol_ref)
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/driver.py", line 1535, in create_cloned_volume
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager     raise NotImplementedError()
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager NotImplementedError
2020-03-11 11:20:19.481 44251 ERROR cinder.volume.manager 
2020-03-11 11:20:19.493 44251 DEBUG cinder.volume.manager [req-847234a9-b02e-4ad1-ab30-7f5db770a7f6 bb83a7a259dc444ebf4da26964d011bc 36d3464abfe0476cad8f668b2261cd8e - default default] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (4d2faa81-31d8-439c-8a0b-2b73a1342c4a) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:189


Nova compute attach fail:
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [req-6d748059-916d-48b6-b831-8067e388fe07 bb83a7a259dc444ebf4da26964d011bc 36d3464abfe0476cad8f668b2261cd8e - - -] [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c] Driver failed to attach volume 12a2dc46-687c-41c9-a390-49cddd77bb3e at /dev/vdb
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c] Traceback (most recent call last):
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 274, in attach
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     device_type=self['device_type'], encryption=encryption)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1240, in attach_volume
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     self._disconnect_volume(connection_info, disk_dev, instance)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     self.force_reraise()
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     six.reraise(self.type_, self.value, self.tb)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1228, in attach_volume
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     guest.attach_device(conf, persistent=True, live=live)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 304, in attach_device
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     rv = execute(f, *args, **kwargs)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     six.reraise(c, e, tb)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     rv = meth(*args, **kwargs)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 605, in attachDeviceFlags
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]     if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c] libvirtError: Cannot access storage file '/var/lib/nova/mnt/47266020eacec99097bdec49f2451d38/volume-12a2dc46-687c-41c9-a390-49cddd77bb3e': No such file or directory
2020-03-11 11:08:31.703 44544 ERROR nova.virt.block_device [instance: 192ff28f-b592-4b4e-9c42-69d3beedca5c]

Comment 8 Alan Bishop 2020-03-11 14:09:29 UTC
Hi Tzach,

Your post-retype observations are interesting, but should be treated separately from this BZ and its fix. The scope of this BZ is to ensure the retype operation doesn't result in data loss (i.e. the underlying volume being deleted).

It looks like the clone failure is due to lack of snapshot support in newton/OSP-10.
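
Consistent with the c-vol traceback in comment 7: in Newton the NFS driver falls back to the base driver's stub, which is the raise shown there (paraphrased sketch, not the exact source):

class VolumeDriver(object):
    def create_cloned_volume(self, volume, src_vref):
        # Drivers without native clone support inherit this stub;
        # this is cinder/volume/driver.py line 1535 in the traceback above.
        raise NotImplementedError()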

The nova attach issue is not great, but I don't know if this is a bug in newton or if it's associated with a "feature" in a post-newton release. The core issue is retyping a volume results in it being assigned a new UUID internally, with metadata that maps it to its original UUID. Here, it seems nova is trying to use the original UUID without looking up its mapping.
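
For context, a minimal sketch (simplified, not the actual Cinder source) of how the on-disk file name is derived; name_id defaults to the API UUID and is repointed at the new internal UUID after a retype that used migration:

def volume_file_name(api_uuid, name_id=None):
    # Cinder's default volume_name_template is 'volume-%s'.
    return 'volume-%s' % (name_id or api_uuid)

So an attach path built from the API UUID alone (volume-12a2dc46-...) misses the real file whenever name_id differs, which matches the libvirt "No such file or directory" error above.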

Comment 9 Tzach Shefi 2020-03-11 14:39:53 UTC
Verified based on comments #5 and #8.
The original BZ issue is fixed and works as expected.

For the newly found Nova problem I reported a new BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1812541

Comment 11 errata-xmlrpc 2020-04-02 10:26:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1301

