Red Hat Bugzilla – Bug 1278074
RFE: please provide a way to specify maximum number of extents taken from PVs in lvcreate
Last modified: 2018-02-12 14:26:20 EST
Description of problem:
In Blivet and the Anaconda installer we need to calculate how many extents, at most, should be allocated on particular PVs for particular LVs. While 'lvcreate' currently supports specifying ranges of extents to use on given PVs, it doesn't support the simpler case of specifying how many extents (at most) may be taken from each PV.
We had a discussion on IRC and the final suggestion was to add support for something like this:
lvcreate -L10G someVG /dev/sda:<500 /dev/sdb:<100
meaning "for this LV allocate no more than 500 extents from /dev/sda and no more than 100 extents from /dev/sdb". That's a sufficient solution for blivet/anaconda and should be a feasible solution for lvm2.
This request is a crucial step towards support of LVM RAID and in general non-linear LVs in blivet and anaconda.
In some ways we already support this kind of 'limit': the list of PVs (and extent ranges) given on the command line is exactly what the allocator can use - i.e. it builds a map of 'free' extents it is allowed to take from, and it stops when it fails to find space.
The main issue here is that there is possibly a misunderstanding between lvm2's goal and Blivet's goal.
This probably needs wider discussion - but the Blivet/Anaconda requirement looks more like it wants to 'overrule' what lvm2, as a volume manager, does.
The purpose of lvm2 is to make decisions according to the available space, which can be used in various ways according to the configured policy.
On the other side there is Blivet/Anaconda, with the assumption that all 'space' is equal and is needed purely for user-visible volumes - but many targets also require user-invisible volumes, i.e. metadata and spare volumes.
I assume we first need a wider discussion about what the user should actually be doing and how the communication with lvm2 should look.
IMHO Blivet cannot try to second-guess lvm2 and 'pretend' it knows what will happen when a complex device stack is created - it's illusory to plan a stack of 10 devices ahead of time with sector precision, present a 'Proceed' button, and expect it to happen exactly that way.
(Lvm2 now supports targets more complex than plain & simple linear/stripe LVs - only for the latter can you mostly do such calculations...)
1) Determine suitable command-line syntax extensions;
2) Add code to parse these command line extensions;
3) Determine how to encode the new constraints to pass to the allocation code;
4) Maintain the remaining quotas available for each device as space is selected within the allocation code;
5) Each time potential space on a device is considered, if its length exceeds the remaining quota for that device, reduce it to fit.
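Steps 4 and 5 above can be sketched as a toy allocator (a hypothetical illustration, not lvm2 code - the function name and data shapes are invented): the remaining quota for each PV is tracked as space is selected, and every candidate run of free extents is clipped to that quota before being taken.

```python
def allocate(extents_needed, free_runs, quotas):
    """Toy quota-limited extent allocation (illustrative only).

    free_runs: {pv: [run_length, ...]} - runs of free extents per PV.
    quotas:    {pv: max_extents} - per-PV cap; PVs absent from the
               dict are uncapped.
    Returns {pv: extents_taken} or None if the request cannot be met.
    """
    taken = {pv: 0 for pv in free_runs}
    remaining = extents_needed
    for pv, runs in free_runs.items():
        for run in runs:
            if remaining == 0:
                break
            # Step 5: clip the candidate run to the PV's remaining quota.
            available = min(run, quotas.get(pv, float("inf")) - taken[pv])
            take = min(available, remaining)
            taken[pv] += take
            remaining -= take
    return taken if remaining == 0 else None

# Roughly what 'lvcreate ... /dev/sda:<500 /dev/sdb:<100' would mean
# for a request of 550 extents:
print(allocate(550, {"sda": [800], "sdb": [200]},
               {"sda": 500, "sdb": 100}))
# {'sda': 500, 'sdb': 50}
```

A real implementation would of course have to respect allocation policies (contiguous, cling, ...) rather than greedily filling PVs in order; the sketch only shows where the quota check fits.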
Command extensions to consider might specify absolute maximum allocations for each device, or might specify relative requirements (50% from disk A and 50% from disk B).
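The relative form could be reduced to the absolute form before allocation runs - a minimal sketch, assuming fractions that sum to 1 and rounding up so the combined quotas always cover the request (the helper name is invented):

```python
import math

def relative_quotas(total_extents, shares):
    """Turn relative requirements (e.g. 50% from A, 50% from B) into
    absolute per-device extent quotas for a request of a given size.

    shares: {pv: fraction}; fractions are expected to sum to 1.0.
    Rounds up so the quotas together always cover the request.
    """
    return {pv: math.ceil(total_extents * frac)
            for pv, frac in shares.items()}

print(relative_quotas(2560, {"sda": 0.5, "sdb": 0.5}))
# {'sda': 1280, 'sdb': 1280}
```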
After some more discussion, it seems it's targeting this use-case:
pvchange --addtag slow pv1 pv2 pv3
pvchange --addtag fast pv4 pv5
lvcreate -L10 vg @slow
lvcreate -L20 vg @fast
Somehow we are not very good at advertising this capability in the man pages.