Bug 1267953 - [RFE][cinder] Efficient volume copy for volume migration
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 8.0 (Liberty)
Assigned To: Eric Harney
QA Contact: lkuchlan
URL: https://blueprints.launchpad.net/cind...
Keywords: FutureFeature, OtherQA
Reported: 2015-10-01 08:50 EDT by Sean Cohen
Modified: 2017-06-14 02:16 EDT
CC List: 4 users

Fixed In Version: openstack-cinder-7.0.0-2.el7ost
Doc Type: Enhancement
Last Closed: 2016-04-07 17:10:24 EDT
Type: Bug

External Tracker: OpenStack gerrit 183701

Description Sean Cohen 2015-10-01 08:50:52 EDT
Currently, Cinder uses the dd command to copy data during volume migration,
but it always copies full blocks even when the source data contains many
null or zero blocks. The dd command has a conv=sparse option that skips
null or zero blocks for a more efficient copy.

However, if the destination volume is not zero-cleared beforehand, we must
copy full blocks from the source to the destination volume so that any
stale data on the destination is overwritten; a sparse copy leaves the
skipped destination blocks untouched, which would be a security issue.
If the destination volume is guaranteed to be pre-initialized
(zero-cleared) beforehand, we can skip copying null and zero blocks by
using a sparse copy.
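
To make the tradeoff concrete, here is a minimal sketch in Python of how
such a copy decision could look, shelling out to dd. The helper name
copy_volume, the dest_is_zeroed flag, and the device paths are
hypothetical illustrations, not Cinder's actual API; the real
implementation is in the Gerrit change linked above.

import subprocess

def copy_volume(src_dev, dst_dev, block_size="1M", dest_is_zeroed=False):
    """Copy one block device to another with dd (illustrative sketch).

    conv=sparse makes dd skip writing NUL-filled blocks. That is only
    safe when the destination is already zero-initialized; otherwise
    the skipped blocks keep whatever stale data the destination held.
    """
    cmd = ["dd", "if=" + src_dev, "of=" + dst_dev, "bs=" + block_size]
    if dest_is_zeroed:
        # Destination is pre-zeroed (e.g. a fresh thin-provisioned LV
        # whose unallocated blocks read back as zeros), so zero blocks
        # can be skipped without leaking old data.
        cmd.append("conv=sparse")
    subprocess.check_call(cmd)

# Hypothetical usage with LVM paths like those in the example below:
# copy_volume("/dev/vg1/volume-1234", "/dev/vg2/volume-1234",
#             dest_is_zeroed=True)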
Comment 2 Sean Cohen 2015-10-01 08:52:59 EDT
If we create a volume from a thin-provisioning pool, its blocks are not pre-allocated; they are allocated on demand. In this situation, if we migrate a detached volume using the dd command, dd copies full blocks from the source to the destination volume even if the source volume contains many null or zero blocks. As a result, usage of the destination volume is always 100%. Here is an example volume migration using the thin LVM driver.

    Before migration

LV            VG    Attr       LSize   Pool     Origin Data%
vg1-pool      vg1   twi-a-tz--   3.80g                10.28
volume-1234   vg1   Vwi-a-tz--   1.00g vg1-pool       19.53

    After migration without conv=sparse option

LV            VG    Attr       LSize   Pool     Origin Data%
vg2-pool      vg2   twi-a-tz--   3.80g                31.45
volume-1234   vg2   Vwi-a-tz--   1.00g vg2-pool      100.00


Using a sparse copy reduces volume usage on the destination storage array compared to a full block copy.
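
One way to observe the difference is to read the Data% column from the
lvs output above programmatically. A small sketch (not part of Cinder;
the helper name is hypothetical) that shells out to lvs:

import subprocess

def thin_lv_data_percent(vg_name, lv_name):
    """Return the allocated-data percentage of a thin LV.

    Illustrative helper that parses the same data_percent column
    shown in the lvs tables above.
    """
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "data_percent",
         "%s/%s" % (vg_name, lv_name)])
    return float(out.decode().strip())

# After a full-block copy the destination reports ~100.00; after a
# sparse copy it stays close to the source's actual usage, e.g.:
# thin_lv_data_percent("vg2", "volume-1234")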
Comment 5 lkuchlan 2016-02-16 05:33:14 EST
Tested using:
python-cinderclient-1.5.0-1.el7ost.noarch
openstack-cinder-7.0.1-6.el7ost.noarch
python-cinder-7.0.1-6.el7ost.noarch

Verification steps:

[stack@instack ~]$ cinder retype 08d4a946-9a8b-4414-891a-234a0d8c579c lvm2 --migration-policy on-demand
[stack@instack ~]$ cinder list
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  |               Migration Status              | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
| 08d4a946-9a8b-4414-891a-234a0d8c579c |  retyping |                  migrating                  |  -   |  1   |     lvm1    |   true   |    False    |             |
| a58dda5a-2d27-427d-b803-7bef022531b4 | available | target:08d4a946-9a8b-4414-891a-234a0d8c579c |  -   |  1   |     lvm2    |   true   |    False    |             |
+--------------------------------------+-----------+---------------------------------------------+------+------+-------------+----------+-------------+-------------+
[stack@instack ~]$ cinder list
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status | Name | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
| 08d4a946-9a8b-4414-891a-234a0d8c579c | available |     success      |  -   |  1   |     lvm2    |   true   |    False    |             |
+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+
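
For completeness, the same retype can also be driven from Python with
python-cinderclient (the package version tested above); a minimal
sketch, where the credentials and auth URL are placeholders:

from cinderclient import client

# Placeholder credentials; in practice these come from the stackrc/
# overcloudrc environment sourced on the undercloud.
cinder = client.Client('2', 'admin', 'secret', 'admin',
                       'http://192.0.2.1:5000/v2.0')

vol = cinder.volumes.get('08d4a946-9a8b-4414-891a-234a0d8c579c')
# Equivalent to: cinder retype <volume> lvm2 --migration-policy on-demand
cinder.volumes.retype(vol, 'lvm2', 'on-demand')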


Results:

lvs output before the migration (the volume still resides in the source pool, cinder-volumes):
LV                                          VG              Attr       LSize Pool                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cinder-volumes-pool                         cinder-volumes  twi-aotz-- 4.64g                            0.83   0.88                            
  volume-08d4a946-9a8b-4414-891a-234a0d8c579c cinder-volumes  Vwi-a-tz-- 1.00g cinder-volumes-pool        3.83                                   
  cinder-volumes2-pool                        cinder-volumes2 twi-aotz-- 9.50g                            0.00   0.59                            

After the migration, the volume resides in cinder-volumes2 with only 1.76% of its data allocated, confirming that zero blocks were not copied:

[root@overcloud-controller-0 ~]# lvs
  LV                                          VG              Attr       LSize Pool                 Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cinder-volumes-pool                         cinder-volumes  twi-aotz-- 4.64g                             0.00   0.63                            
  cinder-volumes2-pool                        cinder-volumes2 twi-aotz-- 9.50g                             0.19   0.68                            
  volume-08d4a946-9a8b-4414-891a-234a0d8c579c cinder-volumes2 Vwi-a-tz-- 1.00g cinder-volumes2-pool        1.76
Comment 7 errata-xmlrpc 2016-04-07 17:10:24 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0603.html
