Stratis would like to allow the user to separately allocate disks as hot spares. Currently, LVM will use unused PV space to fix degraded RAIDs if the raid_fault_policy is "allocate". The concern with this is that unused PV space could be consumed for other purposes in the meantime, leaving insufficient space to activate a hot spare, whereas dedicated hot-spare resources avoid this. ZFS also supports hot spare devices being shared by multiple pools, which would also be nice to have.
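For reference, the ZFS behavior mentioned above looks roughly like this; pool and device names are placeholders:

```shell
# Add the same disk as a hot spare to two different pools.
# ZFS allows a spare to be shared between pools; whichever
# pool degrades first claims it.
zpool add tank1 spare /dev/sde
zpool add tank2 spare /dev/sde

# The spare shows up in the status of both pools.
zpool status tank1
zpool status tank2
```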
Andy, yes, we don't have hot spare support as yet; that was intentional at the time because lvm can allocate dynamically. To address your point, two options spring to mind (at the cost of dropping "allocate"-policy-based automatic repair):
- "pvchange -xn $PVs" the PVs in order to avoid allocations from them; "pvchange -xy $PVs" before manual repair, providing the list of PVs to "lvconvert --repair ..."
- reserve the respective spare space together with a RaidLV by creating a parity-disk amount of linear LVs on separate PVs to prevent allocation by others (no "pvchange -x y/n ..." in this case); those would be removed before manual repair and their PVs passed to "lvconvert --repair ..."
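A rough sketch of both workarounds, assuming a VG named vg, a RaidLV named r1, and placeholder PV names:

```shell
# Option 1: reserve PVs by marking them non-allocatable.
pvchange -xn /dev/sdd1 /dev/sde1   # nothing else can allocate from them now
# ... later, when the raid degrades:
pvchange -xy /dev/sdd1 /dev/sde1   # make them allocatable again
lvconvert --repair vg/r1 /dev/sdd1 /dev/sde1

# Option 2: pin the spare space with a placeholder linear LV instead
# (sized to match the RaidLV's parity requirements).
lvcreate -n r1_spare0 -L 1G vg /dev/sdd1
# ... later, when the raid degrades:
lvremove -y vg/r1_spare0
lvconvert --repair vg/r1 /dev/sdd1
```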
This bug appears to have been reported against 'rawhide' during the Fedora 26 development cycle. Changing version to '26'.
This message is a reminder that Fedora 26 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 26. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '26'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 26 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
LVM2 - unlike 'traditional' tools such as 'mdadm' - maintains the whole disk space and can use individual extents. Thus with e.g. 3 PVs, some portion of the disks can be used for RAID1 while another portion is used for RAID5, depending on the user's needs. So dedicating a whole PV as a 'spare' device is not a proper plan. For thin-pool/cache-pool recovery, lvm2 ATM maintains a single LV named _pmspare, so a single recovery operation shall always be able to proceed. It's a compromise over used space that is supposedly 'wasted' unless something really bad happens. For raid, possibly some similar strategy with certain properties could be deployed. However, since a single VG can hold a number of different RaidLVs with individual raid levels, having a discrete spare device for each of them can be challenging...
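To illustrate the extent-level flexibility and the _pmspare reservation described above (VG and device names are placeholders):

```shell
# Different raid levels can share the same 3 PVs at the extent level.
vgcreate vg /dev/sdb /dev/sdc /dev/sdd
lvcreate --type raid1 -m1 -L 1G -n mirror vg
lvcreate --type raid5 -i2 -L 1G -n stripe5 vg

# Creating a thin pool also creates the hidden pool-metadata spare
# ([lvol0_pmspare]); 'lvs -a' lists hidden LVs as well.
lvcreate -T -L 1G vg/pool
lvs -a vg
```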