Currently Cinder uses the dd command to copy data during volume migration, but dd always copies full blocks even when the source contains many null or zero blocks. dd has a conv=sparse option that skips null and zero blocks for a more efficient copy. However, if the destination volume has not been zero-cleared beforehand, we must copy full blocks from source to destination so that stale data on the destination is overwritten; otherwise it could leak to the new owner (a security issue). If pre-initialization (zero clearing) of the destination volume is ensured beforehand, we can skip copying null and zero blocks by using sparse copy.
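The effect of conv=sparse can be sketched with throwaway regular files; in a real migration the if=/of= arguments would be the source and destination block devices, and the /tmp paths here are purely illustrative:

```shell
# Sketch with scratch files; a zero-filled source copied with and without
# conv=sparse shows the difference in allocated space on the destination.
truncate -s 100M /tmp/src.img                                        # sparse source, reads back as all zeros
dd if=/tmp/src.img of=/tmp/full.img bs=1M 2>/dev/null                # plain copy: writes every block
dd if=/tmp/src.img of=/tmp/sparse.img bs=1M conv=sparse 2>/dev/null  # seeks over zero-filled blocks instead of writing them
du -k /tmp/full.img /tmp/sparse.img                                  # sparse copy allocates far fewer blocks
```

Both output files have the same 100 MiB apparent size, but du shows the conv=sparse copy occupying only a fraction of the blocks the full copy does.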
If we create a volume from a thin-provisioning pool, volume blocks are not pre-allocated; they are allocated on demand. In this situation, if we migrate a detached volume using the dd command, dd copies full blocks from source to destination even if the source volume contains many null or zero blocks. As a result, usage of the destination volume is always 100%. Here is an example volume migration using the thin LVM driver.

Before migration:
  LV          VG   Attr       LSize Pool     Origin Data%
  vg1-pool    vg1  twi-a-tz-- 3.80g                 10.28
  volume-1234 vg1  Vwi-a-tz-- 1.00g vg1-pool        19.53

After migration without the conv=sparse option:
  LV          VG   Attr       LSize Pool     Origin Data%
  vg2-pool    vg2  twi-a-tz-- 3.80g                 31.45
  volume-1234 vg2  Vwi-a-tz-- 1.00g vg2-pool        100.00

Using sparse copy reduces volume usage on the destination storage array compared to a full block copy.
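As the description notes, sparse copy is only safe when the destination already reads back as zeros, which freshly allocated thin-provisioned blocks do. A rough sanity check for that precondition, sketched with a file standing in for the destination volume (a real check would read the destination block device instead):

```shell
# Hypothetical stand-in for the destination volume.
truncate -s 10M /tmp/dest.img            # freshly created: every block reads as zeros
if cmp -s -n $((10 * 1024 * 1024)) /dev/zero /tmp/dest.img; then
    echo "destination reads as zeros: conv=sparse is safe"
else
    echo "destination holds stale data: full copy needed"
fi
```

cmp -n limits the comparison to the volume size; a zero mismatch exit status means every byte of the destination matches /dev/zero, so skipping zero blocks cannot leave stale data behind.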
Feature spec: http://specs.openstack.org/openstack/cinder-specs/specs/liberty/efficient-volume-copy-for-volume-migration.html
Tested using:
  python-cinderclient-1.5.0-1.el7ost.noarch
  openstack-cinder-7.0.1-6.el7ost.noarch
  python-cinder-7.0.1-6.el7ost.noarch

Verification steps:

[stack@instack ~]$ cinder retype 08d4a946-9a8b-4414-891a-234a0d8c579c lvm2 --migration-policy on-demand
[stack@instack ~]$ cinder list
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  |               Migration Status              | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
| 08d4a946-9a8b-4414-891a-234a0d8c579c |  retyping |                  migrating                  |  -   |  1   |     lvm1    |   true   |    False    |             |
| a58dda5a-2d27-427d-b803-7bef022531b4 | available | target:08d4a946-9a8b-4414-891a-234a0d8c579c |  -   |  1   |     lvm2    |   true   |    False    |             |
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
[stack@instack ~]$ cinder list
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
| 08d4a946-9a8b-4414-891a-234a0d8c579c | available |     success      |  -   |  1   |     lvm2    |   true   |    False    |             |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

Results:

During migration:
  LV                                          VG              Attr       LSize Pool                 Origin Data%  Meta% Move Log Cpy%Sync Convert
  cinder-volumes-pool                         cinder-volumes  twi-aotz-- 4.64g                             0.83   0.88
  volume-08d4a946-9a8b-4414-891a-234a0d8c579c cinder-volumes  Vwi-a-tz-- 1.00g cinder-volumes-pool         3.83
  cinder-volumes2-pool                        cinder-volumes2 twi-aotz-- 9.50g                             0.00   0.59

After migration:
[root@overcloud-controller-0 ~]# lvs
  LV                                          VG              Attr       LSize Pool                  Origin Data%  Meta% Move Log Cpy%Sync Convert
  cinder-volumes-pool                         cinder-volumes  twi-aotz-- 4.64g                              0.00   0.63
  cinder-volumes2-pool                        cinder-volumes2 twi-aotz-- 9.50g                              0.19   0.68
  volume-08d4a946-9a8b-4414-891a-234a0d8c579c cinder-volumes2 Vwi-a-tz-- 1.00g cinder-volumes2-pool         1.76

After migration, only 1.76% of the destination thin volume is allocated, confirming that zero blocks were skipped during the copy.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-0603.html