Part 1 included:
- conversions between any of: striped, raid0, raid0_meta, raid4
- conversions between any of: linear, raid1, mirror
Part 2 will finish off the remaining combinations.
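Each of those conversions is a single "lvconvert --type" call. A minimal sketch with hypothetical VG/LV names (vg, lv1, lv2):

  # striped group: striped -> raid0 -> raid0_meta -> raid4
  lvcreate --type striped -i 3 -L 1G -n lv1 vg
  lvconvert --type raid0 vg/lv1
  lvconvert --type raid0_meta vg/lv1
  lvconvert --type raid4 vg/lv1

  # linear group: linear -> raid1 -> mirror
  lvcreate --type linear -L 1G -n lv2 vg
  lvconvert --type raid1 -m 1 vg/lv2
  lvconvert --type mirror vg/lv2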
Related upstream commits:
  baba3f8 lvconvert: add conversion from/to raid10
  a4bbaa3 lvconvert: add segtypes raid6_{ls,rs,la,ra}_6 and conversions to/from it
  3673ce4 lvconvert: add segtype raid6_n_6 and conversions to/from it
  60ddd05 lvconvert: add segtype raid5_n and conversions to/from it
LV types to create for RAID testing (with lvresize/stripes/stripe size variations):

  linear, mirror, raid1,
  striped, raid0, raid0_meta,
  raid4,
  raid5, raid5_n, raid5_ls, raid5_rs, raid5_la, raid5_ra,
  raid6, raid6_zr, raid6_nr, raid6_nc, raid6_n_6, raid6_ls_6, raid6_rs_6, raid6_la_6, raid6_ra_6,
  raid10, raid10_near

LV types to convert to/from (takeover conversions; see the example commands after this list):

  linear <-> raid1
  linear <-> mirror
  mirror <-> raid1
  striped <-> raid0
  striped <-> raid0_meta
  striped <-> raid4
  striped <-> raid5 (i.e. raid5_n)
  striped <-> raid6 (i.e. raid6_n_6)
  striped <-> raid10 (i.e. raid10_near)
  raid0 <-> raid0_meta
  raid0 <-> raid4
  raid0 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
  raid0 <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
  raid0 <-> raid10 (i.e. raid10_near; use --type raid10/raid10_near)
  raid0_meta <-> raid4
  raid0_meta <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
  raid0_meta <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
  raid0_meta <-> raid10 (i.e. raid10_near)
  raid4 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
  raid5_n <-> raid6
  raid5_ls <-> raid6_ls_6
  raid5_rs <-> raid6_rs_6
  raid5_ra <-> raid6_ra_6
  raid5_la <-> raid6_la_6

Also test other combinations, which are expected to fail to convert, e.g. striped (> 1 leg) -> raid1, or raid5 <-> raid10.

LV types to convert to/from (reshape layout variations):

  raid5, raid5_ls, raid5_rs, raid5_la, raid5_ra, raid5_n into each other
  raid6, raid6_zr, raid6_nr, raid6_nc, raid6_ls_6, raid6_rs_6, raid6_la_6, raid6_ra_6 into each other

LV types to convert (stripe size variations, i.e. "lvconvert --stripesize N $RaidLV"):

  raid4,
  raid5, raid5_ls, raid5_rs, raid5_la, raid5_ra, raid5_n,
  raid6, raid6_zr, raid6_nr, raid6_nc, raid6_ls_6, raid6_rs_6, raid6_la_6, raid6_ra_6

LV types to convert (stripes variations, i.e. "lvconvert --stripes N $RaidLV"):

  raid4,
  raid5, raid5_ls, raid5_rs, raid5_la, raid5_ra, raid5_n,
  raid6, raid6_zr, raid6_nr, raid6_nc, raid6_ls_6, raid6_rs_6, raid6_la_6, raid6_ra_6

  (Removing stripes requires the reshape to read the data off all previous stripes first;
  a second "lvconvert $RaidLV" call is then needed to remove the freed stripe images after
  the reshape has finished.)

Convert striped/raid0/raid0_meta into raid5/raid6, change the stripe size, and convert back.

Test region size changes on conversion to raid1/raid4/raid5/raid6/raid10.
Test region size changes on raid1/raid4/raid5/raid6/raid10 without a level conversion.

Test failing legs during conversion (up takeover from a lower to a higher raid level):
- failing the additional leg(s) added by the up conversion -> no data loss
- failing the previous legs -> potential data loss:
  o striped/raid0/raid0_meta -> data loss
  o raid4/raid5(_n) -> raid6(_n_6) with one previous leg failed -> no data loss
  o new leg fails -> no data loss; test transient failure; test permanent failure
    ("vgreduce --removemissing -f $vg"; converting down to the previous layout shall succeed)

Test failing legs during conversion (down takeover from a higher to a lower raid level):
- any raid-level-specific transient/permanent failure tests apply after the conversion
- if data legs other than the removed ones fail during conversion -> data loss in case more
  legs fail than the remaining parity devices can cover;
  e.g. raid5_n -> striped (last dedicated parity leg removed and any of the remaining legs
  failing -> data loss);
  e.g. raid6_n_6 -> raid5_n (last dedicated Q-syndrome leg removed and one remaining leg
  failing -> no data loss)
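As a rough illustration of the matrix above, a minimal sketch of the main test steps
(hypothetical $vg/$lv names; assumes an lvm2 build with the takeover/reshape support from
the commits above, and that each reshape is allowed to finish before the next conversion):

  # takeover chain: striped -> raid0_meta -> raid5_n -> raid6_n_6 and back down
  lvcreate --type striped -i 3 -L 1G -n $lv $vg
  lvconvert --type raid0_meta $vg/$lv
  lvconvert --type raid5_n $vg/$lv
  lvconvert --type raid6_n_6 $vg/$lv
  lvconvert --type raid5_n $vg/$lv

  # reshape layout variations within the raid5 group
  lvconvert --type raid5_rs $vg/$lv
  lvconvert --type raid5_la $vg/$lv

  # stripe size and stripes variations (both are reshapes)
  lvconvert --stripesize 128k $vg/$lv
  lvconvert --stripes 4 $vg/$lv    # add a stripe
  lvconvert --stripes 3 $vg/$lv    # remove a stripe (data is reshaped off it first)
  lvconvert $vg/$lv                # second call removes the freed stripe image

  # region size change without a level change
  lvconvert --regionsize 1m $vg/$lv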
lvm2 upstream commit 34caf8317243 and its prerequisites provide the LV types and conversions as of comment #4 (and the subset requested in the initial description).
Heinz: I did a little editing of the feature description here (and gave it a title) for the release notes. Does that look ok to you? Also, I think we only need one release note description for both this BZ and for BZ#1366296 (with the description referencing both BZ numbers). Would that be ok? Steven
Adding info to doc text noting that RAID takeover was previously available as Tech Preview.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2222