Bug 1403390 - [RFE] lvm raid: support replace without losing resilience
Summary: [RFE] lvm raid: support replace without losing resilience
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-09 22:42 UTC by Heinz Mauelshagen
Modified: 2021-09-07 11:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-01 07:27:47 UTC
Type: Bug
Target Upstream Version:



Description Heinz Mauelshagen 2016-12-09 22:42:38 UTC
Description of problem:
"lvconvert --replace PV RaidLV" allocates rmeta/rimage SubLVs on available free PVs not hlding any SubLVs of RaidLV, then removes SuBLVs on the replacement PV and resynchronizes the new ones.

This procedure reduces redundancy unnecessarily while the new SubLVs resynchronize.
FWIW: pvmove can be used to work around the loss of redundancy.
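
A rough sketch of the pvmove workaround (same made-up names as above; exact behavior may depend on the lvm2 version):

  # Mirror all extents off the departing PV onto a spare PV; both copies stay
  # online while pvmove runs, so the RaidLV keeps its full redundancy.
  pvmove /dev/sdb /dev/sdc

  # Optionally remove the emptied PV from the VG afterwards.
  vgreduce vg /dev/sdb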

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. lvconvert --replace PV RaidLV
2.
3.

Actual results:
The rmeta/rimage SubLVs on the replaced PV are removed as soon as new ones have been allocated on free PVs; only afterwards is the new rimage resynchronized, so redundancy is reduced until the resync completes.

Expected results:
The current rmeta/rimage SubLVs are kept and updated during resynchronization
of the new ones, thus preserving the given resilience throughout the process.
After sync has finished, they can be dropped.
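
As an aside, one possible way to observe the window in question (assuming the usual lvs reporting fields; names as in the sketch above):

  # sync_percent / raid_sync_action show resynchronization progress; with the
  # current behavior the replaced rimage/rmeta SubLVs are already gone while
  # sync_percent is below 100, which is exactly the window where resilience
  # should be preserved.
  lvs -a -o lv_name,sync_percent,raid_sync_action,devices vg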

Additional info:

Comment 4 RHEL Program Management 2020-12-01 07:27:47 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

