Bug 1366296
| Summary: | LVM RAID - Add support for raid level takeover/reshape (part 2) | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jonathan Earl Brassow <jbrassow> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | Steven J. Levine <slevine> |
| Severity: | unspecified | ||
| Priority: | unspecified | CC: | agk, cluster-qe, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, prockai, rbednar, slevine, zkabelac |
| Version: | 7.2 | Keywords: | FutureFeature, Tracking |
| Target Milestone: | rc | Flags: | heinzm: needinfo- |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | lvm2-2.02.169-1.el7 | Doc Type: | Enhancement |
| Doc Text: |
Support added in LVM for RAID level takeover
LVM now provides full support for RAID takeover, previously available as a Technology Preview, which allows users to convert a RAID logical volume from one RAID level to another. This release expands the number of RAID takeover combinations. Support for some transitions may require intermediate steps. New RAID types that are added by means of RAID takeover are not supported in older released kernel versions; these RAID types are raid0, raid0_meta, raid5_n, and raid6_{ls,rs,la,ra,n}_6. Users creating those RAID types or converting to those RAID types on Red Hat Enterprise Linux 7.4 cannot activate the logical volumes on systems running previous releases. RAID takeover is available only on top-level logical volumes in single machine mode (that is, takeover is not available for cluster volume groups or while the RAID is under a snapshot or part of a thin pool).
|
| Story Points: | --- |
| Clone Of: | 1191630 | Environment: | |
| Last Closed: | 2017-08-01 21:47:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1191630 | ||
| Bug Blocks: | 834579, 1189124, 1346081, 1385242, 1394039, 1411727 | ||
|
Comment 1
Jonathan Earl Brassow
2016-08-11 14:40:12 UTC
Related upstream commits:
baba3f8 lvconvert: add conversion from/to raid10
a4bbaa3 lvconvert: add segtypes raid6_{ls,rs,la,ra}_6 and conversions to/from it
3673ce4 lvconvert: add segtype raid6_n_6 and conversions to/from it
60ddd05 lvconvert: add segtype raid5_n and conversions to/from it
LV types to create for RAID testing (with lvresize/stripes/stripe size variations):
linear
mirror
raid1
striped
raid0
raid0_meta
raid4
raid5
raid5_n
raid5_ls
raid5_rs
raid5_la
raid5_ra
raid6
raid6_zr
raid6_nr
raid6_nc
raid6_n_6
raid6_ls_6
raid6_rs_6
raid6_la_6
raid6_ra_6
raid10
raid10_near
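One way to script creation of the LV types above for testing might look like the following sketch. The VG name `vg`, the LV sizes, and the stripe counts are illustrative assumptions, not part of the test plan:

```shell
# Create one small test LV per striped RAID segment type.
# Assumes a VG named "vg" with enough PVs; sizes and stripe counts
# are illustrative only.
for segtype in raid0 raid0_meta raid4 raid5 raid5_n raid5_ls raid5_rs \
               raid5_la raid5_ra raid6 raid6_zr raid6_nr raid6_nc raid6_n_6 \
               raid6_ls_6 raid6_rs_6 raid6_la_6 raid6_ra_6 raid10 raid10_near
do
    lvcreate --yes --type "$segtype" --stripes 3 --size 64m \
             --name "test_${segtype}" vg
done

# linear, mirror, raid1 and plain striped take different options:
lvcreate --yes --size 64m --name test_linear vg
lvcreate --yes --type mirror  --mirrors 1 --size 64m --name test_mirror vg
lvcreate --yes --type raid1   --mirrors 1 --size 64m --name test_raid1 vg
lvcreate --yes --type striped --stripes 3 --size 64m --name test_striped vg
```

Each creation can then be combined with the lvresize/stripes/stripe-size variations mentioned above.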
LV types to convert to/from (takeover conversions):
linear <-> raid1
linear <-> mirror
mirror <-> raid1
striped <-> raid0
striped <-> raid0_meta
striped <-> raid4
striped <-> raid5 (i.e. raid5_n)
striped <-> raid6 (i.e. raid6_n_6)
striped <-> raid10 (i.e. raid10_near)
raid0 <-> raid0_meta
raid0 <-> raid4
raid0 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid0 <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
raid0 <-> raid10 (i.e. raid10_near; use --type raid10/raid10_near)
raid0_meta <-> raid4
raid0_meta <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid0_meta <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
raid0_meta <-> raid10 (i.e. raid10_near)
raid4 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid5_n <-> raid6
raid5_ls <-> raid6_ls_6
raid5_rs <-> raid6_rs_6
raid5_ra <-> raid6_ra_6
raid5_la <-> raid6_la_6
Test the remaining raid type combinations as well; those conversions are
expected to fail, e.g. striped (> 1 leg) -> raid1 or raid5 <-> raid10.
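A takeover test for the pairs above boils down to `lvconvert --type` calls like the following sketch; the VG/LV names are hypothetical, and the comments note the "i.e." segment type that actually results:

```shell
# Takeover sketch: convert between raid levels with lvconvert --type.
# VG/LV names are illustrative.
lvconvert --yes --type raid5 vg/stripedlv     # striped -> raid5 (i.e. raid5_n)
lvconvert --yes --type striped vg/stripedlv   # and back again

lvconvert --yes --type raid6 vg/raid0metalv   # raid0_meta -> raid6 (i.e. raid6_n_6)

lvconvert --yes --type raid1 --mirrors 1 vg/linearlv   # linear -> raid1
lvconvert --yes --type linear vg/linearlv              # raid1 -> linear
```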
LV types to convert to/from (reshape layout variations):
raid5,raid5_ls,raid5_rs,raid5_la,raid5_ra,raid5_n into each other
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6 into each other
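A reshape between layouts of the same level is again a `--type` conversion; each reshape must finish before the next one starts. A sketch with a hypothetical `vg/raid6lv`:

```shell
# Reshape sketch: change the rotation layout within one raid level.
lvconvert --yes --type raid6_nr vg/raid6lv

# wait until the reshape completes (sync_percent reaches 100)
lvs -o name,segtype,sync_percent vg/raid6lv

lvconvert --yes --type raid6_nc vg/raid6lv
```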
LV types to convert (stripesize variations, i.e. "lvconvert --stripesize N $RaidLV"):
raid4, raid5,raid5_ls,raid5_rs,raid5_la,raid5_ra,raid5_6,
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6
LV types to convert (stripes variations, i.e. "lvconvert --stripes N $RaidLV"):
raid4, raid5,raid5_ls,raid5_rs,raid5_la,raid5_ra,raid5_6,
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6
(Removing stripes requires all previous stripes during the reshape so that
their data can be retrieved; once the reshape has finished freeing them, a
second "lvconvert $RaidLV" call removes them.)
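The two-step stripe removal described above can be sketched as follows (the LV name is illustrative):

```shell
# Stripe-count reshape sketch.
lvconvert --yes --stripes 4 vg/raid5lv   # grow: data restriped onto new leg

lvconvert --yes --stripes 3 vg/raid5lv   # shrink: reshape data off last leg
# after the reshape has finished, a second plain call removes the freed images:
lvconvert --yes vg/raid5lv
```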
Convert striped/raid0/raid0_meta into raid5/raid6, change the stripe size, then convert back.
Test region size changes on conversion to raid1/raid4/5/6/10.
Test region size changes on raid1/raid4/5/6/10 without level conversion.
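Both region-size cases might be exercised like this (names and sizes are illustrative):

```shell
# Region size sketch: change it together with a takeover...
lvconvert --yes --type raid1 --mirrors 1 --regionsize 1m vg/linearlv

# ...or on an existing RAID LV without a level change
lvconvert --yes --regionsize 512k vg/raid5lv

# verify the resulting region size
lvs -o name,segtype,region_size vg
```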
Test failing legs during conversion (up takeover from a lower to a higher raid level):
- fail the additional leg(s) added by the up conversion from the lower raid level -> no data loss
- fail the pre-existing legs -> potential data loss:
o striped/raid0/raid0_meta -> data loss
o raid4/raid5(_n) -> raid6(_n_6) with one pre-existing leg failed -> no data loss
o new leg fails -> no data loss; test transient failure; test permanent failure
("vgreduce --removemissing -f $vg"; lvconvert down to the previous layout shall succeed)
Test failing legs during conversion (down takeover from a higher to a lower raid level):
- any raid-level-specific transient/permanent failure tests apply after the conversion
- if data legs other than the removed ones fail during conversion -> data loss once the failures exceed the remaining parity devices;
e.g. raid5_n -> striped (last dedicated parity leg removed and any of the remaining legs fails -> data loss);
e.g. raid6_n_6 -> raid5_n (last dedicated Q-syndrome leg removed and one remaining leg fails -> no data loss)
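The permanent-failure cleanup path referenced above might be driven like this sketch (VG/LV names are illustrative):

```shell
# After a leg fails permanently during an up-conversion, drop the
# missing PV from the VG...
vgreduce --removemissing --force vg

# ...then converting back down to the previous layout shall succeed:
lvconvert --yes --type raid5_n vg/raid6lv
```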
lvm2 upstream commit 34caf8317243 and its prerequisites provide the LV types and conversions as of comment #4 (and the subset requested in the initial description).

Heinz: I did a little editing of the feature description here (and gave it a title) for the release notes. Does that look ok to you? Also, I think we only need one release note description for both this BZ and for BZ#1366296 (with the description referencing both BZ numbers). Would that be ok? Steven

Adding info to doc text noting that RAID takeover was previously available as Tech Preview.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222