+++ This bug was initially created as a clone of Bug #1740560 +++

Description of problem:
Retyping (migrating) a volume that has no volume type ("None") to a volume type on the same backend (NFS) ends up deleting the volume.

Version-Release number of selected component (if applicable):
RHOSP 13.0.7
rhosp13/openstack-cinder-volume 13.0-79

How reproducible:
Always

Steps to Reproduce:

1. openstack volume create --size 5 test-vol01

+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | nova                                                             |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2019-08-13T08:58:15.000000                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach         | False                                                            |
| name                | test-vol01                                                       |
| properties          |                                                                  |
| replication_status  | None                                                             |
| size                | 5                                                                |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | None                                                             |
| updated_at          | None                                                             |
| user_id             | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+---------------------+------------------------------------------------------------------+

2. openstack volume type list

+--------------------------------------+--------+-----------+
| ID                                   | Name   | Is Public |
+--------------------------------------+--------+-----------+
| 8645458e-062e-4321-9103-cd656ca0cee6 | Legacy | True      |
+--------------------------------------+--------+-----------+

openstack volume type show Legacy

+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | Default Storage                      |
| id                 | 8645458e-062e-4321-9103-cd656ca0cee6 |
| is_public          | True                                 |
| name               | Legacy                               |
| properties         | volume_backend_name='tripleo_nfs'    |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+

3. openstack volume set --type Legacy test-vol01

cinder-volume.log:
ERROR oslo_messaging.rpc.server VolumeMigrationFailed: Volume migration failed: Retype requires migration but is not allowed.

4. openstack volume set --retype-policy on-demand --type Legacy test-vol01

openstack volume show test-vol01

+------------------------------+------------------------------------------------------------------+
| Field                        | Value                                                            |
+------------------------------+------------------------------------------------------------------+
| attachments                  | []                                                               |
| availability_zone            | nova                                                             |
| bootable                     | false                                                            |
| consistencygroup_id          | None                                                             |
| created_at                   | 2019-08-13T08:58:15.000000                                       |
| description                  | None                                                             |
| encrypted                    | False                                                            |
| id                           | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach                  | False                                                            |
| name                         | test-vol01                                                       |
| os-vol-tenant-attr:tenant_id | dbe2fb6b113b418da018420b7bc88240                                 |
| properties                   |                                                                  |
| replication_status           | None                                                             |
| size                         | 5                                                                |
| snapshot_id                  | None                                                             |
| source_volid                 | None                                                             |
| status                       | available                                                        |
| type                         | Legacy                                                           |
| updated_at                   | 2019-08-13T09:02:25.000000                                       |
| user_id                      | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+------------------------------+------------------------------------------------------------------+

Actual results:
The volume is copied to a new ID, but Cinder ends up deleting both the copy and the original on the backend.
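A quick way to confirm the data loss on the backend is to look at the Cinder NFS mount on the controller running cinder-volume. This is only an illustrative check, not part of the original report; the mount-point hash and volume file name below are taken from the log output further down and are specific to this environment:

# Run on the controller hosting cinder-volume; adjust the mount hash and volume ID to your environment.
MNT=/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228
ls -l $MNT | grep volume-ef237fb0
# After the failed retype, neither the original volume-ef237fb0-... file nor the
# migrated copy remains on the share, which is why every later operation fails.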
The volume then still shows up in Cinder, but any usage of it fails (attaching, snapshots, etc.):

openstack volume snapshot create --volume test-vol01 test-snap01

+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2019-08-13T09:05:16.337204           |
| description | None                                 |
| id          | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name        | test-snap01                          |
| properties  |                                      |
| size        | 5                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+-------------+--------------------------------------+

openstack volume snapshot show test-snap01

+--------------------------------------------+--------------------------------------+
| Field                                      | Value                                |
+--------------------------------------------+--------------------------------------+
| created_at                                 | 2019-08-13T09:05:16.000000           |
| description                                | None                                 |
| id                                         | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name                                       | test-snap01                          |
| os-extended-snapshot-attributes:progress   | 0%                                   |
| os-extended-snapshot-attributes:project_id | dbe2fb6b113b418da018420b7bc88240     |
| properties                                 |                                      |
| size                                       | 5                                    |
| status                                     | error                                |
| updated_at                                 | 2019-08-13T09:05:16.000000           |
| volume_id                                  | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+--------------------------------------------+--------------------------------------+

2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server [req-8065c636-b6fe-4d08-80a4-42413a8fa6ee 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a dbe2fb6b113b418da018420b7bc88240 - f0cab1f633da4ec99b5d2822c5abced5 f0cab1f633da4ec99b5d2822c5abced5] Exception during message handling: ProcessExecutionError: Unexpected error while running command.
Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
Exit code: 1
Stdout: u''
Stderr: u"qemu-img: Could not open '/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': Could not open '/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': No such file or directory\n"
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/objects/cleanable.py", line 207, in wrapper
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1096, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.save()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1088, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     model_update = self.driver.create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/coordination.py", line 151, in _synchronized
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return f(*a, **k)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 566, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1412, in _create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     new_snap_path)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1246, in _do_create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.volume.name)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 542, in _qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=True)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 764, in _qemu_img_info_base
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=run_as_root)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/image/image_utils.py", line 111, in qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     prlimit=QEMU_IMG_LIMITS)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 126, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return processutils.execute(*cmd, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     cmd=sanitized_cmd)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Exit code: 1
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Stdout: u''

Expected results:
Cinder retype should either just change the volume type of the original volume or keep the copied volume; in no case should the data be deleted.

Additional info:
cinder.conf backend section:

[tripleo_nfs]
backend_host=hostgroup
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False
nfs_sparsed_volumes = True
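For convenience, the reproduction steps above can be condensed into a short script. This is only a sketch reusing the exact commands from this report (volume name test-vol01, volume type Legacy); adjust the names for your environment:

#!/bin/bash
# Condensed reproduction of steps 1-4 above (RHOSP 13, NFS backend).
openstack volume create --size 5 test-vol01
# A plain retype is rejected because it would require a migration:
openstack volume set --type Legacy test-vol01
# Allowing the migration triggers the faulty path that removes both files on the backend:
openstack volume set --retype-policy on-demand --type Legacy test-vol01
openstack volume show test-vol01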
Targeting for 15z3 to give QE time to verify the fix in OSP-15, though the patch will ship in 15z2.
According to our records, this should be resolved by openstack-cinder-14.0.4-0.20200107100455.a59c01e.el8ost. This build is available now.
Verified on: openstack-cinder-14.0.4-0.20200107100455.a59c01e.el8ost

1. Create a "None" type volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-09T14:10:54.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | controller-1@nfs#nfs                 | -> volume is NFS backed
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | f712a2330cc946cbb2a97de1c3ae4408     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-03-09T14:10:55.000000           |
| user_id                        | 269e4b3bcb764befb9183ff717b71204     |
| volume_type                    | None                                 | --> volume type is empty/not set.
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 9ae49b89-fb6c-4775-804e-0b5bf916619b | available | -    | 1    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ openstack volume type list
+--------------------------------------+---------+-----------+
| ID                                   | Name    | Is Public |
+--------------------------------------+---------+-----------+
| 9628c8d8-3429-45c0-a924-d7b70f18fdff | Legacy  | True      |
| 84450e60-f641-4807-8563-fb6f2e6dc459 | tripleo | True      | -> default type, was disabled in cinder.conf
+--------------------------------------+---------+-----------+

(overcloud) [stack@undercloud-0 ~]$ openstack volume set --retype-policy on-demand --type Legacy 9ae49b89-fb6c-4775-804e-0b5bf916619b

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 9798772d-f6ea-47f4-a3ac-bdba3b37d87a | available | -    | 1    | Legacy      | false    |             |
| 9ae49b89-fb6c-4775-804e-0b5bf916619b | retyping  | -    | 1    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 9ae49b89-fb6c-4775-804e-0b5bf916619b | available | -    | 1    | Legacy      | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder show 9ae49b89-fb6c-4775-804e-0b5bf916619b
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-09T14:10:54.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
| metadata                       |                                      |
| migration_status               | success                              |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | controller-1@nfs#nfs                 | -> initial backend/type
| os-vol-mig-status-attr:migstat | success                              |
| os-vol-mig-status-attr:name_id | 9798772d-f6ea-47f4-a3ac-bdba3b37d87a |
| os-vol-tenant-attr:tenant_id   | f712a2330cc946cbb2a97de1c3ae4408     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-03-09T14:21:30.000000           |
| user_id                        | 269e4b3bcb764befb9183ff717b71204     |
| volume_type                    | Legacy                               | -> Good, it's working as expected.
+--------------------------------+--------------------------------------+

Using NFS with two volume types ("None"/empty plus a second type "Legacy", also backed by NFS), I was able to retype/migrate from "None" to Legacy. In the end we are left with the migrated copy under the original volume ID.
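As an extra backend-level sanity check (not part of the original verification; the path and parsing below are only an assumption about this environment): after a retype that required migration the volume keeps its original ID, while the on-disk file follows the name_id shown above, so the surviving data can be located like this:

# Illustrative only: confirm the backing file for the migrated volume exists on the NFS mount.
NAME_ID=$(cinder show 9ae49b89-fb6c-4775-804e-0b5bf916619b | awk '/name_id/ {print $4}')
ls -l /var/lib/cinder/mnt/*/volume-${NAME_ID}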
The migrated volume is also accessible:

(overcloud) [stack@undercloud-0 ~]$ openstack volume snapshot create --volume 9ae49b89-fb6c-4775-804e-0b5bf916619b test-snap01
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2020-03-09T14:34:07.486870           |
| description | None                                 |
| id          | f4776ffc-a12c-448b-9d44-6231be75f9fd |
| name        | test-snap01                          |
| properties  |                                      |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
+-------------+--------------------------------------+

The migrated volume is still available:

(overcloud) [stack@undercloud-0 ~]$ cinder show 9ae49b89-fb6c-4775-804e-0b5bf916619b
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-03-09T14:10:54.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
| metadata                       |                                      |
| migration_status               | success                              |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | controller-1@nfs#nfs                 |
| os-vol-mig-status-attr:migstat | success                              |
| os-vol-mig-status-attr:name_id | 9798772d-f6ea-47f4-a3ac-bdba3b37d87a |
| os-vol-tenant-attr:tenant_id   | f712a2330cc946cbb2a97de1c3ae4408     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-03-09T14:21:30.000000           |
| user_id                        | 269e4b3bcb764befb9183ff717b71204     |
| volume_type                    | Legacy                               |
+--------------------------------+--------------------------------------+

We can also successfully attach the migrated volume to an instance:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach b60631a8-7ddf-4279-aa2c-3e84ff79086a 9ae49b89-fb6c-4775-804e-0b5bf916619b
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
| serverId | b60631a8-7ddf-4279-aa2c-3e84ff79086a |
| tag      | -                                    |
| volumeId | 9ae49b89-fb6c-4775-804e-0b5bf916619b |
+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 9ae49b89-fb6c-4775-804e-0b5bf916619b | in-use    | -            | 1    | Legacy      | false    | b60631a8-7ddf-4279-aa2c-3e84ff79086a |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
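To finish up, the test resources can be removed with something along these lines (a sketch using the server, volume, and snapshot names from the run above; this cleanup was not part of the original verification):

# Detach the volume from the instance, then delete the test snapshot and volume.
nova volume-detach b60631a8-7ddf-4279-aa2c-3e84ff79086a 9ae49b89-fb6c-4775-804e-0b5bf916619b
openstack volume snapshot delete test-snap01
openstack volume delete 9ae49b89-fb6c-4775-804e-0b5bf916619b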