Bug 1420949 - [RFE] Support ZFS-style hot spares
Summary: [RFE] Support ZFS-style hot spares
Status: NEW
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: Fedora
Assignee: Heinz Mauelshagen
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks: stratis-lvm-reqs
Reported: 2017-02-09 23:23 UTC by Andy Grover
Modified: 2018-11-15 23:41 UTC
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed:
rule-engine: lvm-technical-solution?
rule-engine: lvm-test-coverage?


Description Andy Grover 2017-02-09 23:23:03 UTC
Stratis would like to allow the user to allocate disks specifically as hot spares. Currently, LVM uses unused PV space to repair degraded RAID LVs when the RAID fault policy is "allocate". The concern is that the unused PV space could be consumed for other purposes in the meantime, leaving insufficient space to repair, whereas dedicated hot-spare resources avoid this.

ZFS also supports hot spare devices being usable by multiple pools, which would also be nice to have.
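For context, the automatic-repair behavior described above is controlled by the activation/raid_fault_policy setting in lvm.conf. A sketch of inspecting and configuring it (per the lvm.conf(5) and lvmconfig(8) man pages; requires lvm2 installed):

```shell
# Show the currently effective RAID fault policy ("warn" or "allocate")
lvmconfig activation/raid_fault_policy

# In /etc/lvm/lvm.conf, automatic repair from free PV space is enabled with:
#   activation {
#       raid_fault_policy = "allocate"
#   }
```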

Comment 1 Heinz Mauelshagen 2017-02-10 14:46:14 UTC

Yes, we don't have hot-spare support yet; that was intentional at the time because LVM can allocate dynamically.

To address your point, a couple of options spring to mind
(at the cost of dropping "allocate"-policy based automatic repair):

- "pvchange -xn $PVs" PVs in order to avoid allocations;
  "pvchange -xy $PVs" before manual repair providing the list of PVs
  to "lvconvert --repair ..."

- reserve the respective spare space together with a RaidLV by creating a
  parity-disk amount of linear LVs on separate PVs to prevent allocation by
  others (no "pvchange -x y/n ..." in this case); those would be removed
  before manual repair and their PVs passed into "lvconvert --repair ..."
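The first option above can be sketched as follows. The device names (/dev/sdb1, /dev/sdc1) and the LV name vg/raidlv are hypothetical; the pvchange -x (allocatable) flag and the lvconvert --repair PV-restriction syntax are as documented in pvchange(8) and lvconvert(8):

```shell
# Mark the reserved PVs non-allocatable so nothing else lands on them
pvchange -xn /dev/sdb1 /dev/sdc1

# ... later, after a leg of vg/raidlv has failed ...

# Re-enable allocation on the reserved PVs
pvchange -xy /dev/sdb1 /dev/sdc1

# Repair the RAID LV, restricting new allocation to the reserved PVs
lvconvert --repair vg/raidlv /dev/sdb1 /dev/sdc1
```

The trade-off, as noted above, is that nothing repairs the RAID automatically; an administrator (or script) must run the pvchange/lvconvert sequence after a failure.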

Comment 2 Fedora End Of Life 2017-02-28 11:14:28 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 26 development cycle.
Changing version to '26'.

Comment 3 Fedora End Of Life 2018-05-03 08:03:14 UTC
This message is a reminder that Fedora 26 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 26. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '26'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue; we are sorry that we were not
able to fix it before Fedora 26 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 4 Zdenek Kabelac 2018-05-03 08:48:09 UTC
LVM2, unlike 'traditional' tools such as 'mdadm', does not manage whole disks only; it can allocate individual extents.

Thus with e.g. 3 PVs, some portion of the disks can be used for RAID1 while another is used for RAID5, depending on the user's needs.

So dedicating a whole PV as a 'spare' device is not a proper plan.

For thin-pool/cache-pool recovery, lvm2 ATM maintains a single LV named _pmspare, so that a single recovery operation shall always be able to proceed.

It's a compromise: the reserved space is supposedly 'wasted' unless something really bad happens.
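As context for the _pmspare mechanism mentioned above, a brief sketch (vg and tpool are hypothetical names; behavior per lvmthin(7)):

```shell
# Creating a thin pool also reserves a hidden pool metadata spare LV
lvcreate --type thin-pool -L 10G -n tpool vg

# The spare appears among the hidden LVs, sized to match the pool metadata
lvs -a vg    # lists [lvol0_pmspare] alongside tpool and its _tmeta/_tdata

# On metadata damage, repair swaps the spare in for the damaged metadata LV
lvconvert --repair vg/tpool
```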

For RAID, a similar strategy with suitable properties could possibly be deployed.

However, since a single VG can contain a number of different RAID LVs with individual RAID levels, having a discrete spare device for each of them can be challenging...
