Verified on: openstack-cinder-11.1.0-14.el7ost.noarch

Add an NFS backend alongside the existing LVM backend and assign each backend its own availability zone (dc1, dc2) in /etc/cinder/cinder.conf:

[DEFAULT]
enabled_backends = tripleo_iscsi,nfs

[tripleo_iscsi]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
iscsi_ip_address=172.17.3.18
volume_backend_name=tripleo_iscsi
iscsi_helper=lioadm
backend_availability_zone = dc1

[nfs]
volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares.conf
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False
backend_availability_zone = dc2

echo "10.35.160.111:/export/ins_cinder" > /etc/cinder/nfs_shares.conf
systemctl restart openstack-cinder-volume.service

Create types and map them to the backends:

cinder type-create lvm
cinder type-create nfs
cinder type-key nfs set volume_backend_name=nfs
cinder type-key lvm set volume_backend_name=tripleo_iscsi
cinder extra-specs-list

Create a volume on nfs/dc2:

cinder create --display-name nfs-dc2 --volume-type nfs --availability-zone dc2 1

cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 80f32fb7-b566-40d7-932b-d83eb21b609e | available | nfs-dc2 | 1    | nfs         | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

Retype/migrate to the other zone (nfs/dc2 -> lvm/dc1):

cinder retype 80f32fb7-b566-40d7-932b-d83eb21b609e lvm --migration-policy on-demand

During the retype operation:

cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 80f32fb7-b566-40d7-932b-d83eb21b609e | retyping  | nfs-dc2 | 1    | nfs         | false    |             |
| b96926aa-6a99-4f11-851f-7d9b9c1fee40 | available | nfs-dc2 | 1    | lvm         | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

Once the operation completed, the volume was retyped and migrated to the other AZ:

cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 80f32fb7-b566-40d7-932b-d83eb21b609e | available | nfs-dc2 | 1    | lvm         | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

cinder show confirms the AZ change:

cinder show 80f32fb7-b566-40d7-932b-d83eb21b609e
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | dc1  ----> changed from dc2 to dc1   |
| ..                             |                                      |
| id                             | 80f32fb7-b566-40d7-932b-d83eb21b609e |
| ..                             |                                      |
| migration_status               | success                              |
| ..                             |                                      |
| name                           | nfs-dc2                              |
| ..                             |                                      |
| status                         | available                            |
+--------------------------------+--------------------------------------+

Works as expected.
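For reference, the zone layout that makes this work can be sanity-checked before retyping; a minimal sketch (the hostgroup@... host names are taken from this environment and may differ elsewhere):

cinder service-list
# expect one cinder-volume service per backend, each in its own zone, e.g.
# hostgroup@tripleo_iscsi in zone dc1 and hostgroup@nfs in zone dc2
cinder availability-zone-list
# both dc1 and dc2 should be listed as available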
Retype back to NFS:

cinder retype 80f32fb7-b566-40d7-932b-d83eb21b609e nfs --migration-policy on-demand

Again, cinder show confirms the volume moved back to nfs/dc2 and is available:

cinder show 80f32fb7-b566-40d7-932b-d83eb21b609e
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | dc2  -> back at dc2                  |
| id                             | 80f32fb7-b566-40d7-932b-d83eb21b609e |
| metadata                       |                                      |
| migration_status               | success                              |
| multiattach                    | False                                |
| name                           | nfs-dc2                              |
| os-vol-host-attr:host          | hostgroup@nfs#nfs                    |
| os-vol-mig-status-attr:migstat | success                              |
| os-vol-mig-status-attr:name_id | None                                 |
| status                         | available                            |
+--------------------------------+--------------------------------------+

Retested with a non-empty volume; that worked as well.
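The non-empty retest can be reproduced along these lines (a minimal sketch; the volume name nfs-data and the image placeholder are assumptions, any Glance image will do):

cinder create --display-name nfs-data --volume-type nfs --availability-zone dc2 \
    --image-id <glance-image-id> 1
cinder retype nfs-data lvm --migration-policy on-demand
# once migration_status shows success, attach the volume and verify the
# image data survived the move between backends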
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2516