Bug 1403390

Summary: [RFE] lvm raid: support replace without losing resilience
Product: Red Hat Enterprise Linux 8
Reporter: Heinz Mauelshagen <heinzm>
Component: lvm2
Assignee: Heinz Mauelshagen <heinzm>
lvm2 sub component: Mirroring and RAID
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED WONTFIX
Docs Contact:
Severity: unspecified
Priority: low
CC: agk, heinzm, jbrassow, msnitzer, pasik, prajnoha, zkabelac
Version: 8.0
Keywords: FutureFeature, Improvement
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-12-01 07:27:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Heinz Mauelshagen 2016-12-09 22:42:38 UTC
Description of problem:
"lvconvert --replace PV RaidLV" allocates rmeta/rimage SubLVs on available free PVs not hlding any SubLVs of RaidLV, then removes SuBLVs on the replacement PV and resynchronizes the new ones.

This procedure reduces redundancy unnecessarily.
FWIW: pvmove can be used to work around the loss of redundancy.
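For reference, a minimal sketch of the pvmove workaround, assuming a hypothetical VG "vg" with RaidLV "raidlv", the to-be-replaced PV /dev/sdc and a free PV /dev/sdd:

  # Move all extents (including the RaidLV's rmeta/rimage SubLVs) from
  # /dev/sdc to /dev/sdd; pvmove keeps the original copies valid until the
  # temporary mirror is in sync, so redundancy is never reduced.
  pvmove /dev/sdc /dev/sdd

  # Optionally restrict the move to the RaidLV's extents only:
  pvmove -n raidlv /dev/sdc /dev/sdd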

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. lvconvert --replace PV RaidLV (see the reproduction sketch after this list)
2.
3.
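A concrete reproduction sketch (device and VG names are hypothetical; adjust to the local setup):

  # Create a 2-way raid1 LV with images on /dev/sdb and /dev/sdc
  lvcreate --type raid1 -m 1 -L 1G -n raidlv vg /dev/sdb /dev/sdc

  # Replace the rmeta/rimage SubLVs residing on /dev/sdc; new SubLVs are
  # allocated on a free PV (e.g. /dev/sdd) and resynchronized from scratch
  lvconvert --replace /dev/sdc vg/raidlv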

Actual results:
The rmeta/rimage SubLVs on the replaced PV are removed as soon as new ones have been allocated on free PVs; the new rimage is then reconstructed, leaving the RaidLV with reduced redundancy until resynchronization completes.
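
The transient loss of redundancy can be observed while the new images resynchronize, e.g. (sketch only; VG name hypothetical):

  # The old rimage/rmeta SubLVs on the replaced PV disappear immediately,
  # while the new ones start out-of-sync (Cpy%Sync below 100)
  lvs -a -o name,segtype,copy_percent,raid_sync_action,devices vg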

Expected results:
The current rmeta/rimage SubLVs are kept up to date during resynchronization
of the new ones, thus preserving the given resilience throughout the process.
After the sync has finished, the old SubLVs can be dropped.

Additional info:

Comment 4 RHEL Program Management 2020-12-01 07:27:47 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.