Bug 732449 - RFE: LVM RAID - Support RAID1 up-convert
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Corey Marthaler
Keywords: FutureFeature
Blocks: 732458 749672 756082
Reported: 2011-08-22 10:02 EDT by Jonathan Earl Brassow
Modified: 2012-08-27 10:54 EDT
Fixed In Version: lvm2-2.02.95-1.el6
Doc Type: Enhancement
Doc Text:
New Feature to 6.3. No documentation required. Bug 732458 is the bug that requires a release note for the RAID features. Other documentation is found in the LVM manual. Operational bugs need no documentation because they are being fixed before their initial release.
Last Closed: 2012-06-20 10:52:06 EDT
Description Jonathan Earl Brassow 2011-08-22 10:02:58 EDT
Support the ability to up-convert RAID1 logical volumes, including:

1) linear -> n-way
2) m-way -> n-way
3) "mirror" segtype -> "raid1" segtype

'n' is not yet defined to my knowledge.  Perhaps the maximum number of "raid1" images should be the same as for "mirror" images.  I believe that is 8.
Comment 2 Jonathan Earl Brassow 2011-08-22 10:19:24 EDT
Release criteria (test requirements):

1) Linear -> n-way
RAID logical volumes are composed of metadata and data device pairs.  When a linear device is converted to RAID1, a new metadata device must be created and associated with the original LV and it must exist on (one of) the same PV(s) that the linear volume is on.  The additional images are added in metadata/data pairs.  An example might be:
Before convert:
    my_lv    linear    /dev/sdb1
After convert:
    my_lv    raid1
    my_lv_rimage_0    linear    /dev/sdb1
    my_lv_rmeta_0     linear    /dev/sdb1
    my_lv_rimage_1    linear    /dev/sdc1
    my_lv_rmeta_1     linear    /dev/sdc1
If the metadata image that pairs with the original LV cannot be placed on the same PV, the command should fail.

2) m-way -> n-way
Similar to #1, but only the required number of additional metadata/data device pairs need to be added.  Metadata LVs (i.e. LVs named *_rmeta_*) should always exist on the same devices as their data LV counterparts.  The LVs from one metadata/data LV pair should never exist on the same devices as those from another metadata/data LV pair (unless '--alloc anywhere' were used).

3) mirror -> raid1
No new data images need to be created, but they need to be renamed (s/_mimage_/_rimage_/).  Any mirror log must be removed and metadata LVs must be created (on the same PVs as their corresponding data LVs).
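The rename described above is a plain text substitution on the sub-LV names. A minimal sketch of that substitution, using hypothetical mirror sub-LV names (illustrative only; this touches no LVM metadata):

```shell
# Illustrative only: the _mimage_ -> _rimage_ substitution applied to
# hypothetical mirror sub-LV names; real conversions rewrite VG metadata.
for name in lv_mimage_0 lv_mimage_1; do
    printf '%s\n' "$name" | sed 's/_mimage_/_rimage_/'
done
```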
Comment 3 Jonathan Earl Brassow 2011-08-22 10:52:12 EDT
Further requirements:

The format of the command (while not given above) is the same as that for the "mirror" segment type:
~> lvconvert -m <new_absolute_count> vg/lv [allocatable_PVs]
~> lvconvert -m +<num_additional_images> vg/lv [allocatable_PVs]

The user should be able to specify which PVs the new metadata/data image pairs come from.  This should be validated.
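The two -m forms above differ only in how the target count is computed: an absolute value replaces the current one, while a leading '+' adds to it. A sketch of that arithmetic, assuming a hypothetical current -m value of 1 (one mirror, i.e. two images):

```shell
# Illustrative arithmetic only: derive the resulting -m value from the two
# argument forms ("-m <new_absolute_count>" vs "-m +<num_additional_images>").
current=1                 # hypothetical current -m value (two images total)
for arg in 2 +1; do
    case "$arg" in
        +*) new=$((current + ${arg#+})) ;;   # relative: add to current value
        *)  new=$arg ;;                      # absolute: use as given
    esac
    echo "-m $arg -> -m $new ($((new + 1)) images)"
done
```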
Comment 4 Corey Marthaler 2011-08-31 14:37:40 EDT
Adding QA ack for 6.3.

However, development will need to provide unit-test results before this bug
can ultimately be verified by QA.
Comment 6 Jonathan Earl Brassow 2011-11-30 00:20:32 EST
Changes committed upstream in version 2.02.89.
Comment 7 Jonathan Earl Brassow 2012-01-13 17:29:48 EST
Bug is covered by LVM testsuite in lvconvert-raid.sh.

Condition #1:
[root@bp-01 LVM2]# devices vg
  LV   Copy%  Devices     
  lv          /dev/sde1(0)
[root@bp-01 LVM2]# !lvconvert
lvconvert --type raid1 -m 1 vg/lv
[root@bp-01 LVM2]# devices vg
  LV            Copy%  Devices                      
  lv              6.25 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]        /dev/sde1(0)                 
  [lv_rimage_1]        /dev/sdf1(1)                 
  [lv_rmeta_0]         /dev/sde1(256)               
  [lv_rmeta_1]         /dev/sdf1(0)                 

Condition #2:
[root@bp-01 LVM2]# devices vg
  LV            Copy%  Devices                      
  lv              6.25 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]        /dev/sde1(0)                 
  [lv_rimage_1]        /dev/sdf1(1)                 
  [lv_rmeta_0]         /dev/sde1(256)               
  [lv_rmeta_1]         /dev/sdf1(0)                 
[root@bp-01 LVM2]# lvconvert -m 2 vg/lv
[root@bp-01 LVM2]# devices vg
  LV            Copy%  Devices                                     
  lv              6.25 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(0)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(256)                              
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                

Condition #3:
[root@bp-01 LVM2]# devices vg
  LV            Copy%  Devices                      
  lv             15.20 lv_mimage_0(0),lv_mimage_1(0)
  [lv_mimage_0]        /dev/sde1(0)                 
  [lv_mimage_1]        /dev/sdf1(0)                 
  [lv_mlog]            /dev/sdd1(0)                 
[root@bp-01 LVM2]# lvconvert --type raid1 vg/lv
  Unable to convert vg/lv while it is not in-sync
[root@bp-01 LVM2]# lvconvert --type raid1 vg/lv
[root@bp-01 LVM2]# devices vg
  LV            Copy%  Devices                      
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]        /dev/sde1(0)                 
  [lv_rimage_1]        /dev/sdf1(0)                 
  [lv_rmeta_0]         /dev/sde1(125)               
  [lv_rmeta_1]         /dev/sdf1(125)
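The pairing rule from comment 2 (each metadata LV on the same PV as its data-LV counterpart) can be checked mechanically against a captured devices listing like the ones above. A sketch with sample data hard-coded (illustrative only; a real check would read `lvs -a -o name,devices` output):

```shell
# Illustrative check only: from a captured devices listing (sample data
# below), verify each lv_rmeta_N shares a PV with its lv_rimage_N partner.
listing='[lv_rimage_0] /dev/sde1(0)
[lv_rimage_1] /dev/sdf1(0)
[lv_rmeta_0] /dev/sde1(125)
[lv_rmeta_1] /dev/sdf1(125)'

ok=yes
for n in 0 1; do
    # Extract the PV name for each sub-LV, stripping the "(extent)" suffix.
    img=$(printf '%s\n' "$listing"  | awk -v lv="[lv_rimage_$n]" '$1 == lv { sub(/\(.*/, "", $2); print $2 }')
    meta=$(printf '%s\n' "$listing" | awk -v lv="[lv_rmeta_$n]"  '$1 == lv { sub(/\(.*/, "", $2); print $2 }')
    [ "$img" = "$meta" ] || ok=no
done
echo "pairing: $ok"
```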
Comment 10 Corey Marthaler 2012-03-23 12:57:03 EDT
The basic raid1 up-convert cases work. Marking this feature verified in the latest rpms.

2.6.32-251.el6.x86_64

lvm2-2.02.95-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
lvm2-libs-2.02.95-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
lvm2-cluster-2.02.95-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
udev-147-2.40.el6    BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.74-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-libs-1.02.74-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-event-1.02.74-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-event-libs-1.02.74-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
cmirror-2.02.95-2.el6    BUILT: Fri Mar 16 08:39:54 CDT 2012
Comment 11 Jonathan Earl Brassow 2012-04-23 14:18:43 EDT
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
New Feature to 6.3.  No documentation required.

Bug 732458 is the bug that requires a release note for the RAID features.  Other documentation is found in the LVM manual.

Operational bugs need no documentation because they are being fixed before their initial release.
Comment 13 errata-xmlrpc 2012-06-20 10:52:06 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html
