Bug 2006946
| Field | Value |
|---|---|
| Summary | [RFE] need a way to faithfully restore volume groups to clean disks |
| Product | Red Hat Enterprise Linux 8 |
| Version | 8.4 |
| Component | lvm2 (sub component: Command-line tools) |
| Reporter | Pavel Cahyna <pcahyna> |
| Assignee | LVM and device-mapper development team <lvm-team> |
| QA Contact | cluster-qe <cluster-qe> |
| CC | agk, heinzm, jbrassow, msnitzer, prajnoha, teigland, thornber, zkabelac |
| Status | CLOSED DEFERRED |
| Severity | unspecified |
| Priority | unspecified |
| Keywords | FutureFeature |
| Target Milestone | rc |
| Flags | pm-rhel: mirror+ |
| Hardware | Unspecified |
| OS | Linux |
| Doc Type | If docs needed, set a value |
| Bug Blocks | 2037415 (view as bug list) |
| Last Closed | 2022-01-05 15:48:38 UTC |
| Type | Bug |
**Description** (Pavel Cahyna, 2021-09-22 17:17:30 UTC)
There are a number of cases that require something similar. There are three basic parts to a solution:

1. A format/syntax to describe an LVM layout (or storage more broadly).
2. A way to generate this description from the existing system state.
3. A way to create the LVM/storage layout from the description.

For lvm2, and I'm guessing for others too, part 3 would involve running a series of ordinary create/change/convert commands, not trying to use metadata backup restore. There are some efforts under way on parts of this which should be connected to this RFE.

We had an initial discussion with Pavel. The main idea is to enhance 'vgcfgrestore' with options so that it can not only restore lvm2 metadata onto the drive where the original lived, but is also usable on completely new devices. ReaR is used for restoring storage: the user may want to restore the setup onto an identically sized set of drives, or possibly onto a similarly sized set of devices with a different layout. This normally works for metadata-less LV types like linear or striped, but as soon as an LV uses some sort of metadata, other tools are lost and usually end up badly reimplementing lvm2's logic. So we can provide several relatively simple extensions to our existing tooling, e.g. a simpler mechanism to restore "a thin pool with 10 empty thin LVs" with matching names and UUIDs, or some layout for cached/mirrored LVs.

As for comment 2: the current ReaR approach was partially going in the direction of recreating LVs. However, this is technically very hard, and lvm2 does not even provide a mechanism to simply copy all attributes of an LV. Adding a mechanism to the lvm2 codebase to set LV metadata into a wanted state is a more approachable solution, since it keeps the lvm2 knowledge within our codebase, and ReaR can use a high-level abstraction without having to reimplement lvm2 knowledge in that tool.
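For context, the existing manual workflow for restoring a VG's metadata onto brand-new disks is roughly the following sketch (VG name `myvg`, device `/dev/sdb`, and backup path are illustrative; the actual PV UUIDs must be taken from the backup file):

```shell
# Sketch of the current manual restore procedure this RFE aims to simplify.
# Requires root and a real replacement disk; names here are hypothetical.

# The metadata backup (normally kept under /etc/lvm/backup/) records each
# PV's UUID in the "id" field of its pv section; inspect it:
grep 'id =' /etc/lvm/backup/myvg

# Recreate each replacement PV carrying the UUID recorded in the backup:
pvcreate --uuid "<pv-uuid-from-backup>" \
         --restorefile /etc/lvm/backup/myvg /dev/sdb

# Restore the VG metadata; --force is required when the VG contains
# metadata-bearing LVs such as thin pools:
vgcfgrestore --force -f /etc/lvm/backup/myvg myvg
vgchange -ay myvg
```

Note that this only restores lvm2's own view of the layout; for thin pools the on-disk pool metadata is not rebuilt, which is exactly why a metadata-aware "restore to clean disks" mode is being requested.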
In our discussion with Pavel I also had some questions, e.g. what is the meaning of 'snapshots' during such a restore? Clearly there is not much logical sense in restoring them if there are no real data behind them. 'Random' caching may also be a questionable task. So at the moment we may need two new options (with, for now, just randomly picked descriptive names):

--initialize-metadata - ensuring that e.g. a thin pool knows about its thin LVs
--prune-useless-lvs - dropping snapshots / caches?

lvm2 should be capable of transferring a set of LVs even onto different PVs, but this might currently be too hard (it would involve going through creation one LV at a time and transferring attributes), so the focus should go on restoring onto identically sized devices, which matches the existing customer cases.

Hello @zkabelac, thanks for the review. I have two questions:

> --prune-useless-lvs - dropping snapshots / caches?

I understand snapshots, but why are caches a problem? I think it is perfectly valid to recreate caches with the same attributes but empty (with no cached data in them).

> lvm2 should be capable of transferring a set of LVs even onto different PVs, but this might currently be too hard ... so the focus should go on restoring onto identically sized devices, which matches the existing customer cases.

Would it be hard to extend this to same-sized or bigger devices? (I understand that restoring to smaller devices would be difficult.)
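As a concrete illustration of the "recreate empty" approach under discussion: the *shape* of a thin setup (pool plus thin LVs with matching names and sizes) can already be rebuilt by hand with plain `lvcreate` commands, but not its identity. All names and sizes below are hypothetical:

```shell
# Recreates an empty thin pool and two empty thin LVs in a hypothetical
# VG "myvg". This reproduces names and sizes, but lvcreate offers no way
# to set LV UUIDs or copy every attribute from the original LVs -- the
# gap a proposed --initialize-metadata style option would close.
lvcreate --type thin-pool -L 10G -n pool0 myvg
lvcreate --type thin -V 2G --thinpool pool0 -n thin1 myvg
lvcreate --type thin -V 2G --thinpool pool0 -n thin2 myvg
```

A restore tool like ReaR would have to generate and order such commands for every LV type and attribute combination, which is the lvm2 knowledge the comments above argue should stay inside lvm2 itself.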