Bug 709492 - [RFE] Need an LVM tool capable of striping an existing LV
Status: POST
Product: Fedora
Classification: Fedora
Component: lvm2
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Assigned To: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
Docs Contact: Heinz Mauelshagen
Keywords: FutureFeature
Depends On:
Blocks: 1394039
Reported: 2011-05-31 15:53 EDT by Billy Crook
Modified: 2017-02-28 12:05 EST
CC: 13 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Billy Crook 2011-05-31 15:53:11 EDT
Description of problem:

LVM tools do not provide a clear method of converting an existing, non-striped LV into a striped LV.

Version-Release number of selected component (if applicable):
# lvm version
  LVM version:     2.02.84(2) (2011-02-09)
  Library version: 1.02.63 (2011-02-09)
  Driver version:  4.19.1
# cat /etc/*release
Fedora release 15 (Lovelock)
# uname -a
Linux hostname #1 SMP Mon May 9 20:45:15 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:

Steps to Reproduce:
1. lvcreate /dev/vg_hostname -n lv_test -L1G
2. lvconvert -i2 /dev/vg_hostname/lv_test
3. pvdisplay --maps
Actual results:
No change to lvm layout.

Expected results:
When multiple PVs exist in the VG and lv_test is using X extents on one PV, command (2) should allocate X/2 extents from another PV, move every other extent from the original PV to the new PV (compacting the remainder on the original PV), and then free the vacated extents on the original PV. The result is an LV striped equally across two PVs.
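The round-robin placement requested above can be sketched as follows. This is purely an illustration of the desired layout; restripe_layout is a made-up function that models where each logical extent would land, and does not drive LVM:

```shell
#!/bin/sh
# Illustrative sketch of the requested 2-way restripe: logical extent i of
# the LV would land on PV (i mod stripes) after the conversion, i.e. even
# extents stay on the original PV and odd extents move to the new one.
restripe_layout() {
    extents=$1
    stripes=$2
    i=0
    while [ "$i" -lt "$extents" ]; do
        printf 'LE %d -> PV%d\n' "$i" $(( i % stripes ))
        i=$(( i + 1 ))
    done
}

restripe_layout 4 2
```

For a 4-extent LV and 2 stripes this prints LE 0 and LE 2 on PV0, LE 1 and LE 3 on PV1 — the interleaving the request describes.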

It would also probably be prudent for this to work in both directions, i.e. converting a striped LV back into a non-striped one.

Additional info:

(02:29:22 PM) agk:   - 1 cmdline to accept stripes parameters (slight conflict to resolve with an existing arg)
(02:29:47 PM) agk:    - 2 - ability to allocate striped segments within the mirror it creates
(02:30:11 PM) agk:   - 3 - ability to merge striped segments back into the old segments after the move
Comment 1 Fedora End Of Life 2013-04-03 12:00:02 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it could also affect pre-Fedora 19 development
cycle bugs. We are very sorry. It will help us with cleanup during Fedora 19 End Of Life. Thank you.)

More information and the reason for this action are here:
Comment 2 Chuck Mattern 2015-12-14 14:24:44 EST
I'm a Red Hat Solution Architect weighing in on this. I have a customer with an open support case for a similar functionality, balancing a stripe set when an additional disk is added to a VG. My customer sees this as a critical feature and I concur. I believe that the addition of this capability would get LVM closer to some much desired capabilities in ZFS, delivering that benefit across all of the filesystems that reside above while keeping the changes in a single set of code.
Comment 3 Bryn M. Reeves 2015-12-15 07:17:05 EST
You should probably look at the RAID re-shaping and takeover support that is being worked on:


This uses the MD RAID personalities in device-mapper and allows much greater flexibility in re-arranging existing arrays (either to grow, change level or to take over non-RAID devices).
Comment 4 Heinz Mauelshagen 2015-12-15 07:52:30 EST
Seconding Bryn here.

Any restriping needs to be ACID in order to survive crashes/reboots/... during the process. That's why we'll use the MD kernel runtime, which already offers the required properties (aka MD takeover and reshaping), thus not reinventing any wheel.

The tradeoff, though, is that any existing linear/striped/mirrored/raid1 logical volume has to be converted to a raid4/5/6 logical volume in order to restripe it, and that involves allocating on more than just 2 PVs due to MD and resilience constraints: at least 3 PVs are mandatory, because another stripe has to be allocated for parity information (or even 2 additional stripes for raid6, since more parity information is stored at that raid level).
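The PV-count constraint just described can be summarized in a tiny helper. This is a sketch only — min_pvs is a made-up function, not an lvm2 command:

```shell
#!/bin/sh
# Minimum number of PVs needed to restripe via MD takeover, per the
# constraints above: raid4/raid5 add one parity stripe, raid6 adds two.
min_pvs() {
    level=$1
    data_stripes=$2
    case "$level" in
        raid4|raid5) echo $(( data_stripes + 1 )) ;;
        raid6)       echo $(( data_stripes + 2 )) ;;
        *)           echo "$data_stripes" ;;
    esac
}

min_pvs raid5 2   # 2 data stripes via raid5 -> 3 PVs
min_pvs raid6 2   # 2 data stripes via raid6 -> 4 PVs
```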

Billy's requested linear -> 2-way-striped LV layout change is not possible under those constraints, because only 2 PVs are given.

But there's another, way more flexible option under development to actually achieve what is requested in this bz by lvm2/dm means:

LV duplication and unduplication...

"lvconvert --duplicate --stripes 2 [--stripesize N] vg_hostname/lv_test"
will set up a raid1 stack mirroring the given vg_hostname/lv_test (making it the hidden master leg) to a 2-way striped hidden LV. Once the initial synchronization is done,
"lvconvert --unduplicate --stripes 2 vg_hostname/lv_test" will drop the raid1 stack and leave behind the intended striped logical volume.

Either case requires more space, but the latter works on just 2 PVs, presuming the 2 PVs have enough free space to hold the new striped LV and a bit
of raid1 metadata (typically an extent on each PV in this example), which is less than with the MD takeover+reshape approach.

Way more can be achieved with duplication/unduplication but I'll limit it to the requested example for now.

Expecting this to show up in upstream next year...
Comment 5 Heinz Mauelshagen 2017-02-28 12:05:28 EST
lvm2 upstream commit 34caf8317243 and its prerequisites allow for takeover/reshaping combinations _presuming_ a 3rd PV can be (temporarily) used.
The command sequence converts the linear LV to a 2-legged raid1, then to raid5, then reshapes to 2 data stripes, and finally converts to striped, ending with 2 data stripes:

# lvconvert -y -m1 $lv
# lvconvert -y --type raid5 $lv
# lvconvert -y -f --stripes 2 $lv   # 3 PVs needed; wait until the reshape finishes
# lvconvert -y --type striped $lv

Duplication support will be added later...
